Disable `pip` retries when installing packages - speed up installation - python-2.7

I have a custom pip repository which is used inside my company but not outside. Now every time I'm at home and want to install something, e.g. pip install pandas, pip also tries the company repo. Even after decreasing the timeout, pip still retries multiple times, which takes a long time.
What can I do to speed things up? My pip configuration file contains:
[global]
timeout = 1
extra-index-url = http://foo.bar/packages
trusted-host = pypi-dev.foo.bar

pip will ignore the settings in pip.conf if you run it with the --isolated option:
pip install --isolated <package>
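If you would rather not bypass all configuration, pip also honors the PIP_CONFIG_FILE environment variable, so you can point it at an empty or alternative config for a single run. A minimal sketch, assuming a Unix shell (the pip-home.conf path is just an illustration):
# Ignore every config file for this one invocation
PIP_CONFIG_FILE=/dev/null pip install pandas
# Or keep a second config without the company index and switch to it at home
PIP_CONFIG_FILE=~/.pip/pip-home.conf pip install pandas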

Related

Superset & Prophet (Forecast) not working

I am using Superset v1.5.1 and trying to use the Forecast option. I ran multiple commands, including:
pip --no-cache-dir install pystan==2.19.1.1 && pip install prophet
or
pip install lunarcalendar tqdm "pystan<3.0" && pip install "prophet>=1.0.1, <1.1"
in my Superset container, but when I try to use Forecast, it still doesn't work.
The error Superset shows is the following:
No results were returned for this query
The strange part is that the query works just fine until I turn on Forecast; the moment I do, it just gives me 0-row results and that's it. I guess it comes from Prophet, but I can't seem to figure out what is missing.
No matter what parameters or data I give, it just returns 0 rows, every time.
There are no logs in Superset, and the install works fine.
I am using Druid SQL to query Druid.
Any help would be appreciated.
This was solved on GitHub - the dependencies were messed up. Quoting ecederstrand:
Preinstalling lunarcalendar, tqdm and pystan is necessary because prophet <= 1.0 has a bug that requires these packages to be available when running setup.py to build the package. This was fixed in prophet 1.1, but that version can't be installed alongside superset 2.0 due to a version conflict on the holidays package. I just contributed a fix for that conflict in #21091.
In short, add this to your Dockerfile for Superset:
RUN pip install --upgrade pip
RUN pip install lunarcalendar tqdm "pystan<3.0" && pip install "prophet>=1.0.1, <1.1"
And it will work.
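For context, here is a minimal sketch of how those two lines might sit in a Superset Dockerfile. The base image tag and the USER switch are my assumptions based on the official apache/superset image, not part of the original answer:
FROM apache/superset:1.5.1
# Install as root, since the image normally runs unprivileged
USER root
RUN pip install --upgrade pip
RUN pip install lunarcalendar tqdm "pystan<3.0" && \
    pip install "prophet>=1.0.1, <1.1"
# Drop back to the unprivileged user the official image runs as
USER superset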

pipenv updates all dependencies bringing breaking changes

I'm having problems with an app that uses Django. Everything is in a Docker container; there is a Pipfile and a Pipfile.lock. So far, so good.
The problem is when I want to install a new dependency. I open the Docker container's shell and install the dependency with pipenv install <package-name>.
After installing the package, pipenv runs a command to update the Pipfile.lock file, and in doing so updates all packages to their latest versions, bringing with these updates a lot of breaking changes.
I don't understand why this is happening. I have all packages listed in my Pipfile with ~=, which is supposed to avoid updating to versions that can break your app.
I'll give you an example. I have this dependency in my Pipfile: dj-stripe = "~=2.4". But in the Pipfile.lock file, after pipenv runs the lock command, that dependency is updated to its latest version (2.5.1).
What am I doing wrong?
Are you sure you're installing it within Docker? A common cause of Pipfile.lock conflicts is installing a package locally instead of within Docker; when the local environment syncs with Docker, it will override your Pipfile.lock.
Assuming you're using docker-compose, this is how I'm installing my packages:
docker-compose exec web pipenv install <package-name>
I discovered what my problem was.
I had been listing the dependencies like this: ~=2.4. I thought that meant not to update to 2.5 or greater, but that's not true; it only tells pipenv not to update to 3.0 or greater.
In order to stay on the 2.4 series, I must specify the last version number, for example: ~=2.4.0.
That way, I'm telling pipenv not to update beyond 2.4.x.
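To make the difference concrete, here is how the two specifiers compare in a Pipfile (per PEP 440 compatible-release semantics):
[packages]
# ~=2.4 means >=2.4, <3.0 - so pipenv may lock 2.5.1
# dj-stripe = "~=2.4"
# ~=2.4.0 means >=2.4.0, <2.5.0 - stays on the 2.4 series
dj-stripe = "~=2.4.0"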

pip install returns nothing in command line

I am setting up a Windows 10 machine. After installing Python 2.7.10 I tried to install some libraries, but when I do
pip install grpcio
the command prompt returns nothing. I added --verbose, and the execution terminates after printing
Starting new HTTPS connection (1): pypi.python.org
I tried to update pip using pip install --upgrade pip, but I get exactly the same behavior. I also tried to execute get-pip.py, but this is what I get:
1 location(s) to search for versions of pip:
* https://pypi.org/simple/pip/
Fetching project page and analyzing links: https://pypi.org/simple/pip/
Getting page https://pypi.org/simple/pip/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/pip/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
The problem is pretty much the same with easy_install.
I have an internet connection, pip is in the Scripts folder, and PATH contains the path to python2.7. --no-cache-dir does not help. Do you have any ideas of what's wrong?
The problem was my Python installation. I had to do a complete uninstall and then install again.
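For anyone hitting the same symptom, one thing worth ruling out before reinstalling (my suggestion, not something established in this thread): a Python/pip this old may fail silently because it cannot negotiate TLS 1.2 with PyPI, and pypi.python.org has since been retired in favor of pypi.org. A quick check:
# PyPI requires TLS 1.2+; very old OpenSSL builds cannot provide it
python -c "import ssl; print(ssl.OPENSSL_VERSION)"
# Point an old pip at the current index instead of the retired pypi.python.org
pip install --index-url https://pypi.org/simple/ grpcio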

Some Confusion about easy_install without Root Access

Preface
I am so new to ssh/unix protocols that I hope I don't offend anybody.
Context
I am using the cores at my university and do not have root access. Thus, when I install Python modules, I resort to the answers in these two related Stack Overflow posts:
1) How to install python modules without root access?
2) How to install python packages without root privileges?
In the second post, Col Panic highly recommends getting pip or easy_install on the cores, and if they are not already there, "you should politely ask the admins to add it, explaining the benefit to them (they won't be bothered anymore by requests for individual packages)."
Following that piece of advice, I requested that the admins put easy_install on all the cores. They did, and after some proverbial futzing around with export, PATH and PYTHONPATH, I was able to get numpy and scipy onto the cores and import them in an IPython environment.
Unfortunately, there were some problems with matplotlib related to this question: ImportError: No module named backend_tkagg
I thought I could just ignore this SUSE-related problem by pickling everything and then plotting it on my laptop.
My Problem
I really do need NetworkX. I wrote down notes on all the small intricacies I used to install the other packages on my last go, but they failed me this time around. Maybe I am forgetting something that I did last time?
nemo01.65$ easy_install --prefix=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal networkx
TEST FAILED: /u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages does
NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
and your PYTHONPATH environment variable currently contains:
'/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python2.7/site-packages'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
My Attempts to Fix This
I really do need networkx; otherwise I have to adjust a bunch of the code that I want to put on the clusters.
1) I typed in:
export PYTHONPATH=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
into the bash environment. No luck...
2) I asked another grad for some help. He suggested I install pip via easy_install, which I did and then use:
pip install --user networkx
When I type in:
find ./local/lib/python2.7/site-packages/ | grep net
I get a ton of files that are all from the networkx library. Unfortunately, there are still some problems with dependencies.
THANK YOU IN ADVANCE FOR YOUR HELP. Really enjoy learning new things from your answers.
It looks like there are multiple versions of pip floating around (cf. pip: dealing with multiple Python versions?). Try installing pip using a specific version of easy_install. For example, this gave me a pip2.7:
walnut.39$ easy_install-2.7 -U --user pip
Searching for pip
Reading https://pypi.python.org/simple/pip/
Best match: pip 1.5.6
Processing pip-1.5.6-py2.7.egg
pip 1.5.6 is already the active version in easy-install.pth
Installing pip script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2.7 script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2 script to /u/walnut/h1/grad/rcompton/.local/bin
Using /net/walnut/h1/grad/rcompton/.local/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg
Processing dependencies for pip
Finished processing dependencies for pip
walnut.40$
Then use pip2.7
walnut.40$ pip2.7 install --user networkx
Also, for non-root package installations, I've got the following lines in my .bashrc:
export PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages
export PATH=$PATH:~/.local/bin
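A quick way to avoid this class of mix-up entirely (a general tip, assuming a pip recent enough to support -m invocation): run pip through the exact interpreter you want to install for.
# Shows which Python this pip wrapper belongs to
pip2.7 --version
# Unambiguous: the interpreter on the left is the one that gets the package
python2.7 -m pip install --user networkx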

How to speed up the Jenkins build process while installing requirements using pip

I am using Jenkins CI for my Django project. For Django-Jenkins integration I am using the django-jenkins app. In the build step of Jenkins I create a fresh virtualenv and install all the dependencies for each build from the requirements file. However, this makes the build extremely slow, because a fresh copy of all the dependencies must be downloaded from a PyPI mirror even if nothing has changed in the dependencies since the last build. So I started using pip's built-in local caching by setting the PIP_DOWNLOAD_CACHE environment variable. But the whole build process is still painfully slow and takes more than 10 minutes. Is there any way I could speed up the whole process? Maybe by caching the compiled dependencies or something else?
Only install a fresh virtualenv if your requirements.txt file changes. This can be done easily with some shell commands. We are doing something similar in one of our projects. In a Jenkins shell window we have (after svn up):
touch changed.txt
stat -c %Y project/requirements.txt > changed1.txt
diff -q changed.txt changed1.txt || echo "DO YOUR PIP --upgrade HERE!"
# Record the current timestamp so the next build compares against it
mv changed1.txt changed.txt
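A variant of the same idea that keys off the file's content rather than its modification time (a sketch; md5sum and cmp assumed available on the build node):
md5sum project/requirements.txt > req.md5.new
# cmp fails on the first build (no req.md5 yet) or whenever the hash changes
cmp -s req.md5 req.md5.new || pip install --upgrade -r project/requirements.txt
mv req.md5.new req.md5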
Why bother creating a fresh virtualenv each time you build? You should be able to create just one and simply activate it with . /path/to/venv/bin/activate as an 'Execute shell script' build step (assuming Linux here). Then, if you need to install a new dependency, you can activate the venv yourself and pip install the new package.
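A rough sketch of what such an 'Execute shell script' build step could look like for this Django/django-jenkins setup - the venv path is illustrative, and manage.py jenkins is the build task django-jenkins provides:
VENV=/var/lib/jenkins/venvs/myproject
# Create the virtualenv only once; later builds reuse it
[ -d "$VENV" ] || virtualenv "$VENV"
. "$VENV/bin/activate"
# Already-satisfied pins are skipped, so this is fast when nothing changed
pip install -r requirements.txt
python manage.py jenkins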