Managing conda env in a cross-platform environment [duplicate]

This question already has answers here:
How to share conda environments across platforms
(6 answers)
My project is supposed to run in a cross-platform environment (Mac, Windows, Linux).
I've created a conda env that manages our dependencies for an easy setup.
I want to ensure that everyone who wants to update the env can do so; however, when I export the env from Linux to a YAML file, it can't be installed properly on Windows or Mac, and vice versa.
I've already tried to do the regular stuff:
1.
conda env export > env.yml
conda env create --name <env_name> -f env.yml
2.
conda env export --no-builds > env.yml
3.
https://medium.com/@Amet13/building-a-cross-platform-python-installer-using-conda-constructor-f91b70d393
4.
https://tech.zegami.com/conda-constructor-tutorial-make-your-python-code-easy-to-install-cross-platform-f0c1f3096ae4
5.
https://github.com/ESSS/conda-devenv/blob/master/README.rst
None of the above gave me the right answer... Some of the tutorials I've attached might help, but I didn't manage to implement them correctly, and they were missing some important information needed to finish them properly.
For instance:
Regarding 3/4: they don't explain how to create the yml file that should construct the env.
I understood that conda is supposed to work in cross-platform environments...
It would be great if someone could help me with that.

Conda Envs are Not Inherently Cross-Platform
Sorry, but what you're asking for is simply not a thing. Conda can serialize an environment's package information to a YAML file (great for reproducibility), but it can't guarantee that the resulting file will be cross-platform. In fact, many packages, especially ones with non-Python code, require different underlying build tools as dependencies on different platforms, so what you're asking for can never be fully satisfied.
Explicit Specs Only
The closest you can get these days is to limit your environment.yaml to only the explicit specs that went into creating your environment, using the --from-history flag. This feature requires Conda v4.7.12 or later.
conda env export --from-history > environment.yaml
This will generate a YAML that only includes the packages that have been explicitly requested in the history of the env, e.g., if your history goes...
conda create -n foo python=3.7 numpy
conda install -n foo pandas scikit-learn
Then the result of conda env export -n foo --from-history will be something like
name: foo
channels:
- defaults
dependencies:
- python=3.7
- numpy
- pandas
- scikit-learn
prefix: /your/conda/dir/envs/foo
This way, you can leave out all the other dependencies that may turn out to be platform-specific.
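For example, a minimal round trip under this approach might look like the following (a sketch; foo is the env name from the example above):
# on the source machine: export only the explicitly requested specs (Conda >= 4.7.12)
conda env export -n foo --from-history > environment.yaml
# on the target platform: recreate the env and let the solver pick platform-appropriate builds
conda env create -n foo -f environment.yaml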
I'm Still Seeing a Ton of Packages?!
I've noticed that if one ever uses the --update-deps flag in an env, it promotes every dependency to an explicit spec. This is rather unfortunate. If this has happened, I'd suggest recreating the env from your legitimate specs and avoiding that flag in the future. Searching through your command history might be useful in compiling that legitimate spec list.
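If you need to reconstruct that list, something along these lines may help (a sketch; foo, the env path, and the final spec list are illustrative):
# show each install/update step recorded for the env
conda list -n foo --revisions
# the raw commands that touched the env are also logged in its history file
grep '# cmd' /your/conda/dir/envs/foo/conda-meta/history
# then recreate the env from the legitimate specs only, e.g.
conda create -n foo-clean python=3.7 numpy pandas scikit-learn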

Related

Python2 required as dependency but already installed

First of all, sorry for my English and for taking your time; I'm pretty sure the answers are online, but it seems I can't find the right keywords for this issue.
The problem: Python2 is installed but is still asked for as a dependency for every Python2 module I want to install with my package manager.
I built python2 from source on Arch Linux in a proot environment (because I use Termux on a non-rooted phone, which is probably why yay did not work as expected to install python2), and I think I did it well, because "python2" opens the Python command line, "python2 -V" returns "Python 2.7.18", I can execute Python scripts with it, etc.
I built Python by downloading and uncompressing python2 from python.org, then running ./configure --enable-optimizations, make -s, and make install in the extracted directory.
I'm a noob, so I don't know for sure, but I probably just need a way to handle python2 with pacman, or a way to tell pacman that python2 is indeed installed.
I know how to handle manually built packages with pacman, but not software built from source. So I've found a PKGBUILD for python2, but (again, probably because of the proot) when I use makepkg, here is what happens:
[...
...
...]
==> Extracting sources...
-> Extracting Python-2.7.18.tar.xz with bsdtar
==> Starting prepare()...
bsdtar: Removing leading '/' from member names
patching file Makefile.pre.in
patch: setting attribute security.selinux for security.selinux: Permission denied
==> ERROR: A failure occurred in prepare().
Aborting...
So if anyone knows how I could make makepkg work as intended, or how I could tell pacman that python2 is already installed, it would totally make my day.
PS: I know Python2 is deprecated and, as it's no longer updated, its security is getting worse and worse, but it's not for my main setup, so don't worry. I also think I could install modules manually, but that's not something I wish to do, since I'd like to install the whole BlackArch repo.

heroku python buildpack pip install not adding the entry-points.txt file when installing

My runtime is python-3.7.5
I have an Django reusable app with an entry point in setup.py defined as:
setup(
    ...
    entry_points={'my.group': 'foo = bar'},
)
That allows me to use pkg_resources.iter_entry_points(group="my.group", name=None) to get a list of plugins.
I didn't know that until I hit this bug, but it seems to rely on an entry_points.txt file that gets installed in the egg-info.
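As a quick sanity check (a sketch using the group name from the setup above), you can enumerate what is actually registered:
# prints the entry points registered under 'my.group';
# an empty list suggests entry_points.txt was not installed
python -c "import pkg_resources; print(list(pkg_resources.iter_entry_points('my.group')))"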
This entry_points.txt file seems to be missing when I push to heroku. I did a heroku run bash and:
~/.heroku/python/lib/python3.7/site-packages/m_package.egg-info $ ls
dependency_links.txt installed-files.txt PKG-INFO SOURCES.txt top_level.txt
but when I uninstall it and install it manually, and I recheck:
~/.heroku/python/lib/python3.7/site-packages/my_package.egg-info $ ls
dependency_links.txt entry_points.txt installed-files.txt PKG-INFO requires.txt SOURCES.txt top_level.txt
Am I missing something that the buildpack does?
The only extra thing to add is that I'm using https://github.com/timshadel/heroku-buildpack-github-netrc.git to get HTTPS authentication in git (my requirements.txt has some packages from private GitHub repos), but I don't think this should matter at all.
After messing with the official buildpack, I realized it was just caching the packages, and since I had updated my_package's code but not its version, it was not picking up the new library, hence no entry points. When I did pip install by hand on the Heroku instance, it picked up the right library.
Good to know anyway, so I'm keeping the question and the answer in case anyone has the same problem one day.
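If you hit the same caching behavior, one workaround (besides bumping the package version) is to clear Heroku's build cache; a sketch using the heroku-builds plugin, with your-app as a placeholder:
heroku plugins:install heroku-builds
# force the next push to reinstall every package instead of using the cache
heroku builds:cache:purge -a your-app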

Pabot - Unable to run parallel robotframework tests

So, I'm working on a robotframework test project, and the goal is to run several test suites in parallel. For this purpose, pabot was chosen as the solution. I am trying to implement it, but with little success.
My issue is: after installing Pabot (which, I might add, I did by cloning the project and running "python setup.py install" instead of using pip, since the corporate proxy I'm behind has proven an obstacle I can't overcome), I created a new directory in the project tree, moved some suites there, and ran:
pabot --processes 2 --outputdir pabot_results Login*.robot
Doing so results in the following error message:
2018-10-10 10:27:30.449000 [PID:9676] [0] EXECUTING Suites.LoginAdmin
2018-10-10 10:27:30.449000 [PID:400] [1] EXECUTING Suites.LoginUser
2018-10-10 10:27:30.777000 [PID:400] [1] FAILED Suites.LoginUser
2018-10-10 10:27:30.777000 [PID:9676] [0] FAILED Suites.LoginAdmin
WARN: No output files in "pabot_results\pabot_results"
Output:
[ ERROR ] Reading XML source '' failed: invalid mode ('rb') or filename
Try --help for usage information.
Elapsed time: 0 minutes 0.578 seconds
Upon inspecting the stderr file that was generated, I have this message:
Traceback (most recent call last):
File "C:\Python27\Lib\site-packages\robotframework-3.1a2.dev1-py2.7.egg\robot\running\runner.py", line 22, in
from .context import EXECUTION_CONTEXTS
ValueError: Attempted relative import in non-package
Apparently, this has to do with something from the runner.py script, which, if I'm not mistaken, came with the installation of robotframework. Since manually modifying that script does not seem to me the optimal solution, my question is, what am I missing here? Did I forget to do anything while setting this up? Or is this an issue of compatibility between versions?
This project is using Maven as the tool to manage dependencies. The version I am running is 3.5.4. I am using a Windows 10, 64bit system; I have Python 2.7.14, and Robot Framework 3.1a2.dev1. The Pabot version is 0.44. Obviously, I added C:\Python27 and C:\Python27\Scripts to the PATH environment variable.
Edit: I am also using robotframework-maven-plugin version 1.4.0.8, if that happens to be relevant.
Edit 2: added the error messages in text format.
I believe I've come across a similar issue when setting up parallel execution on my machine. First, I would confirm that pabot is installed, using pip show robotframework-pabot.
Then you should define the directory your results will go to, using -d.
I then set the output file name to Output.xml with -o, to make it easy to identify.
This is a copy of the command I use; it runs optimally with 8 processes:
pabot --processes 8 -d results -o Output.xml Tests
It seems that you stumbled on a bug in the prerelease version of Robot Framework (3.1a2.dev1).
Please install a release version of robot framework. For example 3.0.4.
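In pip terms that would be something like the following sketch (versions taken from this thread):
# replace the 3.1a2.dev1 prerelease with a stable release
pip uninstall robotframework
pip install robotframework==3.0.4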
Just in case anyone happens to stumble upon this issue in the future:
Since I couldn't use pip, and the workarounds I tried eventually made things more unstable, I ended up backing up my project and removing everything Python-related from my system, so I could install everything from scratch. On a Windows 10, 64-bit system, I used:
Python 2.7.14
wxPython 2.8.12.1, win64, unicode, for py27
setuptools 40.2.0 (to allow me to use the easy_install command)
Robot Framework 3.0.4
robotremoteserver 1.1
Selenium2Library 3.0.0
and Pabot version 0.45.
I might add that, when installing Selenium2Library the way I described above, it eventually tries to download some things from the pip repositories, which, if you are behind a proxy, will cause you trouble. I solved this problem by browsing https://pypi.org/simple/selenium/, manually downloading the 2.53.6 .tar.gz file, then extracting it and running setup.py install on the command line.
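The manual route described above boils down to this sketch (the archive is the one downloaded by hand from PyPI):
tar -xzf selenium-2.53.6.tar.gz
cd selenium-2.53.6
python setup.py install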
PS: Ideally, though, anyone should be able to use proxy settings from the command line (--proxy http://user:password#server:port) to get pip and then use it; however, for some reason, probably related to network security configurations that I didn't want to lose time with, this didn't work in my case.

Unable to install Anaconda environment containing anaconda 4.0.0 np110py27_0

In Anaconda I am trying to create an environment using an environment.yml file which begins with the lines:
name: mytest
dependencies:
- anaconda=4.0.0=np110py27_0
However when trying to create the environment, I get the error:
Fetching package metadata .........
Solving package specifications: ....
Error: The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
I encountered no problems when I did this seven days ago, but when I tried this yesterday I got the error.
I am running on Windows 7 64-bit as administrator, Anaconda 2.2.0 (Python 2.7 version). The "conda list" output includes conda 4.1.11 and conda-env 2.5.2.
To try to isolate the error, I installed Miniconda2 on a different 64-bit Windows 7 computer (as administrator) that had never had any Anaconda/Miniconda installed before. This is the most recent 64-bit Python 2.7 series (Miniconda2-4.1.11-Windows-x86_64.exe).
But trying to install anaconda=4.0.0=np110py27_0, either to a new environment or to the root environment, both produce the same error I received before:
C:\>conda install anaconda=4.0.0=np110py27_0
Fetching package metadata .........
.Solving package specifications: ....
The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
C:\>conda create --name test400 anaconda=4.0.0=np110py27_0
Fetching package metadata .........
.Solving package specifications: ....
The following specifications were found to be in conflict:
- anaconda 4.0.0 np110py27_0
Use "conda info <package>" to see the dependencies for each package.
How can I determine what is causing the conflict, and how could I resolve it, given that conda is not naming a second package in its error message? I have seen responses to other "specifications in conflict" questions in which the answer is often "Install the problematic package to a separate python environment", but in this case the new environment could not be created with the package. Starting from a clean Miniconda install did not work either. I suspect something has changed in the Anaconda repository (which would be consistent with the original environment.yml working in the past but not now), but how would I determine if this is the underlying issue?
Thanks.
The underlying issue was a temporary error in the https://repo.continuum.io/pkgs/free/win-64/repodata.json file, which has since been fixed.
For reference for anyone investigating Anaconda dependency conflicts, here are the details of the investigation, and the workaround for this case:
The cause:
The repodata.json file (linked above) is essentially a 'master list' of the dependencies of the various libraries in https://repo.continuum.io/pkgs/free/win-64/. The "conda" command uses this repodata.json file.
While the problem was occurring, the repodata.json file incorrectly listed "_nb_ext_conf" as a dependency for each version of ipywidgets. (The /info/index.json file inside "ipywidgets-4.1.1-py27_0.tar.bz2" did not list "_nb_ext_conf" as a dependency; however, I think newer versions of ipywidgets require it.)
The "_nb_ext_conf-0.2.0-py27_0.tar.bz2" and "_nb_ext_conf-0.3.0-py27_0.tar.bz2" files list "notebook >=4.2.0" as a dependency in their info/index.json files.
The info/index.json file in anaconda-4.0.0-np110py27_0.tar.bz2 (which is used when you specify "anaconda=4.0.0=np110py27_0" in an environment.yml) lists "ipywidgets 4.1.1 py27_0" as a dependency.
Due to the temporary problem in repodata.json, this "ipywidgets 4.1.1 py27_0" caused conda to think "_nb_ext_conf" needed to be installed, thus causing conda to think "notebook >=4.2.0" also needed to be installed.
But the info/index.json file in anaconda-4.0.0-np110py27_0.tar.bz2 also specifies that the specific version "notebook 4.1.0 py27_2" must be installed.
The conflicting requirements for "notebook" versions (4.1.0 and >=4.2.0) caused the "specifications were found to be in conflict" error.
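To check the channel metadata directly, you can inspect the repodata.json entry for the suspect package; a sketch using the URL and package from this answer:
# pretty-print the channel metadata and look at the "depends" list for ipywidgets
curl -s https://repo.continuum.io/pkgs/free/win-64/repodata.json | python -m json.tool > repodata_pretty.json
grep -A 10 '"ipywidgets-4.1.1-py27_0.tar.bz2"' repodata_pretty.json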
The workaround:
First, remove the line "- anaconda=4.0.0=np110py27_0" from the environment.yml file.
Next, replace that line in environment.yml with every library listed in the "depends" section of the info/index.json file from anaconda-4.0.0-np110py27_0.tar.bz2. (Remove the quotation marks, replace the spaces with equals signs, etc. to convert the .json syntax to the environment.yml syntax.)
Finally, remove the "- notebook=4.1.0=py27_2" line from this list.
This new environment.yml file will now list every library which would have been installed by "anaconda=4.0.0=np110py27_0", with the exception of "notebook"; "notebook" will get installed anyway, due to the "notebook >=4.2.0" requirement in "_nb_ext_conf" (pulled in by "ipywidgets") and/or the "notebook" requirement in "ipywidgets" itself.
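The conversion step can be scripted; a sketch, assuming the package archive has been downloaded and a tar that handles .tar.bz2 is available:
# pull just the metadata out of the package archive
tar -xjf anaconda-4.0.0-np110py27_0.tar.bz2 info/index.json
# turn each "depends" entry ("name version build") into a YAML spec line ("- name=version=build")
python -c "import json; print('\n'.join('- ' + d.replace(' ', '=') for d in json.load(open('info/index.json'))['depends']))"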
Investigative tools:
The command "conda info anaconda=4.0.0=np110py27_0" gives the list of libraries required by the specified package, according to repodata.json. I put this list of libraries into a temporary_environment.yml file. Attempting to create an environment from that temporary_environment.yml file caused conda to specify that "notebook" was involved in the conflict, which gave the hint to try omitting "notebook".
Running "conda info" lists all the libraries currently installed in the active environment. The output for the environment created by temporary_environment.yml was compared to the output from an environment from a computer where "anaconda=4.0.0=np110py27_0" had previously installed successfully. This highlighted "_nb_ext_conf" as one difference.
I created a batch file which ran "conda info" for every library listed in anaconda=4.0.0=np110py27_0, and I looked for instances of "notebook" and "_nb_ext_conf" in the output. This pointed to "ipywidgets" as a suspect.
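A shell equivalent of that batch file might look like this sketch (specs.txt is a hypothetical file holding one spec per line from the metapackage's depends list):
# query each dependency and flag the two suspects in its output
while read -r spec; do
    conda info "$spec"
done < specs.txt | grep -E 'notebook|_nb_ext_conf'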

"No module named yum error" with python2.7 when using Ansible on Fedora

I'm trying to install packages through Ansible but am getting a "No module named yum" error with python2.7. Has anyone ever faced this issue before?
Fedora Core 3? No wonder! That release is from 2004. My memory of over a decade ago is a little hazy — it appears that yum was available in that release, but I think "up2date" was still the official higher-level package manager.
But, also, the yum version is 2.x, and it's packaged to work with the system Python of the time, which was Python 2.3. It's highly unlikely that the ansible module will work even with kludges. If you really need to install packages there, you will need to find an alternate way *. Plus, the mirror infrastructure for FC3 is no longer standing — you'll at the very least need to point to the archive.
I do, however, encourage you to use a newer version of Fedora if at all possible, not just for the convenience of things working (though, there's that) but because there are numerous known exploits which will work on FC3 — I would be very hesitant about having any Linux distro which reached end of life in 2006 on a network now. (Disclaimer: I happen to work on the current Fedora).
* alternate way: The easiest approach is probably skipping the yum module and just having Ansible run the yum command directly.
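For example, with Ansible's raw module, which does not need a usable Python on the target (fc3-box and the package name are placeholders):
# run yum directly on the remote host, bypassing Ansible's module machinery
ansible fc3-box -m raw -a "yum -y install httpd"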
My problem was I had set ansible_python_interpreter to something other than the default python since I needed a virtualenv python. The virtualenv python did not have the yum module installed.
Executing the yum statement before I set the ansible_python_interpreter fact fixed this for me. I could have also set the fact back to the original value (usually /usr/bin/python) if this wasn't an option.
For those looking to set this fact, you can use:
- set_fact:
    ansible_python_default_interpreter: "{{ ansible_python_interpreter }}"
    ansible_python_interpreter: "{{ virtualenv_dir }}/bin/python"
Where {{ virtualenv_dir }} is the directory in which you used the pip module to install the virtualenv, as described at http://docs.ansible.com/ansible/pip_module.html.
And then to set it back:
- set_fact:
    ansible_python_interpreter: "{{ ansible_python_default_interpreter }}"
This goes by whatever is returned by sys.executable, which is usually /usr/bin/python.
For the curious, this is the current code block at ansible/lib/ansible/inventory/__init__.py lines 461-462 (subject to change!):
if "ansible_python_interpreter" not in new_host.vars:
new_host.set_variable("ansible_python_interpreter", sys.executable)
I found the pro tip about the interpreter fact in this thread:
https://groups.google.com/forum/#!msg/ansible-project/yNWKzV5F-QU/e-vkWJKf6tQJ
A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended.
Reference: https://docs.python.org/2/tutorial/modules.html
If you are getting an error that states "No module named yum", it is because no yum.py file can be found on the module search path.
Ansible and YUM are not supported on Windows systems.
Reference: http://docs.ansible.com/intro_installation.html