Setting up a conda environment with a really old Python version (python-2.7)

I am writing some python scripts for a computer that, for various reasons, is stuck at python v2.7.5 (upgrading is not an option).
I would like to set up a conda environment on my desktop to test the scripts out before transferring them to the production machine.
$ conda create --name py275 python=2.7.5
Collecting package metadata (current_repodata.json): done
Solving environment: failed
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- python=2.7.5
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
When I do conda search python, the oldest version available to me is 2.7.13, but if I go to anaconda.org and do a search, the files for python 2.7.5 are there and they appear to be in the main channel.
How can I point conda to those files to allow me to create the proper environment?

After searching long and hard for the answer, I found it after posting here: the Python 2.7.5 packages are in the anaconda channel, not the main channel.
$ conda create --name py275 -c anaconda python=2.7.5
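Once the environment solves, a quick sanity check (a minimal sketch; very old conda installs may need source activate instead of conda activate):
$ conda activate py275
$ python --version
Python 2.7.5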


How to make GitLab Windows shared runners build faster?

Background
I have a CI pipeline for a C++ library I've been developing. So far, I can distribute this lib to Linux and Windows systems. Since I use GitLab to build, test and package my lib, I'd like my Windows builds to run faster, but I have no clue how to do that.
Currently, I use the following script for my Windows builds:
.windows_template:
  tags:
    - windows
  before_script:
    - choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
    - choco install python --pre -y
    - choco install git -y
    - $env:ChocolateyInstall = Convert-Path "$((Get-Command choco).Path)\..\.."; Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"; refreshenv
    - python -m pip install --upgrade pip
    - pip install conan monotonic
The problem
Any build with the script above can take up to 10 minutes; worse, I have two stages, each taking the same amount of time. This means my whole CI pipeline takes 20 minutes to finish because of the slowness of the Windows builds.
Ideal solution
EVERYTHING in my before_script can be cached or stored as an image. I only need some hints on how to do it properly.
Additional information
I use the following tools for my builds:
- CMake: to support my building process;
- Python3: to test and build packages;
- Conan (requires Python3): to support the creation of packages with several features, as well as distribute them;
- Git: to download Googletest in the CMake configuration step (this is already provided in the cookbooks -- I might just remove this extra installation step from my before_script);
- Googletest (requires Python3): testing library;
- Visual Studio DEV Tools: to compile the library (this is already in the cookbooks).
Installing packages like this (whether it's OS packages through apt-get install..., or pip, or anything else) is generally against best practices for CI/CD jobs, because every job that runs will have to do the same thing, costing a lot of time as you run more pipelines, as you've seen already.
A few alternatives are to search for an existing image that has everything you need (possible but not likely with more dependencies), split up your job into pieces that might be solved by an image with just one or two dependencies, or create a custom docker image to use in your jobs. I answered a similar question with an example a few weeks ago here: "Unable to locate package git" when running GitLab CI/CD pipeline
But here's an example Dockerfile with Windows:
# Dockerfile
FROM mcr.microsoft.com/windows
RUN ./install_chocolatey.sh
RUN choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
RUN choco install python --pre -y
RUN choco install git -y
...
The FROM line says that our new image extends the mcr.microsoft.com/windows base image. You can extend any image you have access to, even if it already extends another image (in fact, that's how most images work: they start with something small, like a base OS installation, then add things needed for that package. PHP for example starts on an Ubuntu image, then installs the necessary PHP packages).
The first RUN line is just an example. I'm not a Windows user and don't have experience installing Chocolatey, but you'd do here whatever you'd normally do to install it locally. The rest are for installing whatever else you need.
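For illustration only, here is a rough sketch of how that first step might look on a Windows image (the Server Core base tag and Chocolatey's published install one-liner are assumptions; adapt them to your setup):
# Dockerfile (sketch)
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command"]
# install Chocolatey with its published one-liner, then install the tools on top of it
RUN Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
RUN choco install cmake.install -y --installargs '"ADD_CMAKE_TO_PATH=System"'
RUN choco install python --pre -y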
Then run
docker build /path/to/dockerfile-dir -t mygroup/mytag:version
The path you supply needs to be the directory that contains the Dockerfile, not the Dockerfile itself. The -t flag sets the image's tag after it's built (though you can do that with a separate command, docker tag too).
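For example, using the same placeholder names, you can build from the current directory and then retag the image for a registry afterwards:
docker build . -t mygroup/mytag:version
docker tag mygroup/mytag:version my.docker.hub.com/mygroup/mytag:version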
Next, you'll have to log into whichever registry you're using (Docker Hub (https://docs.docker.com/docker-hub/repos/), GitLab Container Registry (https://docs.gitlab.com/ee/user/packages/container_registry/), a private registry your employer may support, or any other option):
docker login my.docker.hub.com
Now you can push the image to the registry:
docker push my.docker.hub.com/mygroup/mytag:version
You'll have to review the information in the docs about telling your GitLab runner or pipelines how to authenticate with the registry (unless the image is public on Docker Hub or you use the GitLab Container Registry): https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
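One approach described in those docs is to store a DOCKER_AUTH_CONFIG CI/CD variable containing a docker config.json entry for your registry; roughly (hypothetical credentials and registry name):
# generate the base64 auth string for the variable
printf "my_username:my_password" | base64
# then set DOCKER_AUTH_CONFIG (as a CI/CD variable) to something like:
# {"auths": {"my.docker.hub.com": {"auth": "<output of the base64 command above>"}}}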
Once all that's done, you can use your new image in your CI jobs, and everything we put into the image will be ready to use:
.windows_template:
  image: my.docker.hub.com/mygroup/mytag:version
  tags:
    - windows
  ...

Is it possible to create a standalone file to import a python library created with pybind?

I hope I'm clear in my question; if not, please tell me.
I am using OpenImageIO's python bindings (pybind11) for some scripts that will run on hundreds of computers. Unfortunately it took me a lot of time to install OpenImageIO and make it work with my Python2 installation. I'd like to know if there's a way to create a file/folder that I could send to other computers so they can install the Python module simply with "pip install file/folder".
Thanks for your help.
Are you running the scripts on a compute cluster with a shared filesystem? If so, then there's no need to create separate installations of python for each machine. The simplest solution is to create ONE python environment in a location that is accessible by all of your machines. An easy way to create a Python environment in a non-system location is to use Miniconda. Install it to a shared (network) location, and create an environment for all of your machines to use.
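For example, a minimal sketch (assuming a shared path such as /shared/tools that every machine can see, and the Linux Python 2 Miniconda installer):
bash Miniconda2-latest-Linux-x86_64.sh -b -p /shared/tools/miniconda2
/shared/tools/miniconda2/bin/conda --version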
If your machines do NOT have a shared file system, then you'll need to somehow reproduce the environment on all of them independently. In this case, there's no simple way to do that with pip.**
But if you can use conda instead, then there's a very straightforward solution. First, install everything you need into a single conda environment. Then you have a choice: You can export the list of conda packages, or simply copy the entire conda environment directory to the other machines.
OpenImageIO is available from the conda-forge channel, a community-developed repository of conda packages. The name of the package is py-openimageio. They have stopped updating the python-2.7 version, but the old versions are still available.
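To see which versions are still available, you can run (assuming the conda-forge channel is reachable):
conda search -c conda-forge py-openimageio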
Here's how to do it.
Install Miniconda-2.7
Create a new environment with python 2.7, OpenImageIO, and any other packages you need:
conda create -n jao-stuff -c conda-forge py-openimageio python=2.7
conda activate jao-stuff
python -c "import OpenImageIO; print('It works!')"
Do ONE of the following:
a. Export the list of packages in your environment:
conda env export -n jao-stuff -f jao-stuff-packages.yaml
Then, on the other machines, install Miniconda, then create the environments using the package list from the previous step:
conda env create -n jao-stuff -f jao-stuff-packages.yaml
OR
b. Just copy all of the files in the environment to the other machines, and run them directly. Conda environments are self-contained (except for a few low-level system libraries), so you can usually just copy the whole thing to another machine and run it without any further install step.
tar czf jao-stuff.tar.gz -C $(conda info --base)/envs jao-stuff
On the other machine, unpack the tarball anywhere and just run the python executable it contains:
tar xzf jao-stuff.tar.gz
jao-stuff/bin/python -c "import OpenImageIO; print('It works!')"
**That's because OpenImageIO is a C++ project, with several C++ dependencies, and they don't provide binaries in the wheel format. I don't blame them -- pip is not well suited to this use-case, even with wheels. Conda, on the other hand, was designed for exactly this use-case, and works perfectly for it.

Check if a package is available in Python 3

I have a project in Python2.7 and want to port it to Python3.6.
I want to build some kind of dependency tree where I can see which packages are available for Python 3.6, but I don't know how to check that without trying to install them into a Python 3.6 environment.
For example: boto-rsync is not available for Python 3.6, but redis is.
I need some advice on where to start.
Use pip freeze in both your python2 and python3 environments and redirect the output to text files to see which packages are installed in each.
Example:
pip freeze >> python2.txt
See the pip documentation for details.
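For example, to capture both environments and compare them (a minimal sketch; run each pip freeze inside the matching environment):
pip freeze > python2.txt
pip3 freeze > python3.txt
diff python2.txt python3.txt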

Launching Cassandra cqlsh: python not found

I am trying to install Cassandra version 2.2.0, and I found that the compatible Python version for it is 2.7.10, so I installed it.
When I type in the terminal
python2.7 --version
Python 2.7.10
but when I launch the Cassandra server and try to start the Cassandra Query Language shell by typing
root#eman:/usr/local/cassandra# bin/cqlsh
bin/cqlsh: 19: bin/cqlsh: python: not found
How can I fix this issue?
Thanks in advance.
For CentOS 8 and similar systems:
Install python 2.7
Then, prior to invoking cqlsh, run:
sudo alternatives --set python /usr/bin/python2
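A quick sanity check afterwards (the expected output is an assumption; your minor version may differ):
python --version   # should now report Python 2.7.x
bin/cqlsh          # cqlsh should now find "python" on the PATH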
It seems that python is not installed on your machine (for whatever reason).
cqlsh shells out to python (in a rather strange way): https://github.com/spiside/cqlsh/blob/6f5b680fed2e48e37107fd1da272e351e5ac257d/cqlsh#L1-L30
Unrelated to this stackoverflow issue, I attempted to fix (and probably fixed) this in the latest version of cqlsh: https://github.com/spiside/cqlsh/commit/a880445ec9d05cfa552928d5a88d1457640456b6
If you can upgrade cqlsh it may fix this.
If you can't upgrade cqlsh any of the following things should fix this:
- If you're on a Debian-like system, apt-get install python-minimal -- this provides the /usr/bin/python file that seems to be missing (for whatever reason)
- If your package manager has a package which provides the /usr/bin/python symlink, install that
- Otherwise, set up a symlink that's on your path, for example ln -sf /usr/bin/python2.7 /usr/local/bin/python

Some Confusion about easy_install without Root Access

Preface
I am so new to ssh/unix protocols that I hope I don't offend anybody.
Context
I am using the cores at my university, and do not have root access. Thus, when I install python modules, I resort to the answer on these two related stack overflow posts:
1) How to install python modules without root access?
2) How to install python packages without root privileges?
In the second post, Col Panic highly recommends getting pip or easy_install on the cores, and if they are not already there, "you should politely ask the admins to add it, explaining the benefit to them (they won't be bothered anymore by requests for individual packages)."
Following that piece of advice, I requested that the admins put easy_install on all the cores. They did, and after some proverbial futzing around with export, PATH, and PYTHONPATH, I was able to get numpy and scipy onto the cores and import them in an IPython environment.
Unfortunately, there were some problems with matplotlib related to this question: ImportError: No module named backend_tkagg
I thought I could just ignore this problem related to SUSE by pickling everything and then plotting it on my laptop.
My Problem
I really do need NetworkX. I wrote down some notes on all the small intricacies that I used to install the other packages on my last go, but they failed me this time around. Maybe I am forgetting something that I did last time?
nemo01.65$ easy_install --prefix=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal networkx
TEST FAILED: /u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages does
NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
and your PYTHONPATH environment variable currently contains:
'/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python2.7/site-packages'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
My Attempts to Fix This
I really do need networkx; otherwise I have to adjust a bunch of the code that I want to put on the clusters.
1) I typed in:
export PYTHONPATH=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
into the bash environment. No luck...
2) I asked another grad for some help. He suggested I install pip via easy_install, which I did and then use:
pip install --user networkx
When I type in:
find ./local/lib/python2.7/site-packages/ | grep net
I get a ton of files that are all from the networkx library. Unfortunately, there are still some problems with dependencies.
THANK YOU IN ADVANCE FOR YOUR HELP. Really enjoy learning new things from your answers.
It looks like there are multiple versions of pip floating around (cf. pip: dealing with multiple Python versions?). Try installing pip using a specific version of easy_install. For example, this gave me a pip2.7:
walnut.39$ easy_install-2.7 -U --user pip
Searching for pip
Reading https://pypi.python.org/simple/pip/
Best match: pip 1.5.6
Processing pip-1.5.6-py2.7.egg
pip 1.5.6 is already the active version in easy-install.pth
Installing pip script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2.7 script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2 script to /u/walnut/h1/grad/rcompton/.local/bin
Using /net/walnut/h1/grad/rcompton/.local/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg
Processing dependencies for pip
Finished processing dependencies for pip
walnut.40$
Then use pip2.7
walnut.40$ pip2.7 install --user networkx
Also, for non-root package installations, I've got the following lines in my .bashrc:
export PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages
export PATH=$PATH:~/.local/bin
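With those lines in place, a quick check that --user installs are picked up (a minimal sketch; the printed version will vary):
python2.7 -c "import networkx; print(networkx.__version__)"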