I had an old version of django-bouncer that required hashcompat, which is now deprecated. Since I was getting errors telling me this, I did pip uninstall django-bouncer, then installed the version upgraded for Django 1.6 (it uses hashlib instead of hashcompat) using pip install https://github.com/shelfworthy/django-bouncer/archive/master.tar.gz (I also re-added it to my requirements.txt file).
Locally, this is working fine. However, when I push to Heroku, I'm still getting the error "No module named hashcompat."
I tried doing a git push heroku master --force, but that didn't resolve the problem. Then I reset the app with heroku repo:reset -a <myappname>, followed by a fresh git push heroku master. Unfortunately, I'm still getting the error on my Heroku app.
How can I make Heroku get the upgrade of django-bouncer?
What you should do is this:
Firstly, install django-bouncer's latest release locally on your laptop (you can do this by running pip install -U django-bouncer).
Next, figure out what the latest version is on your laptop, by running: pip freeze | grep django-bouncer. You should see something like: django-bouncer==x.x.x.
Lastly, edit your project's requirements.txt file and add django-bouncer==x.x.x, then push this change to Heroku. This will force Heroku to detect the specific version of django-bouncer that's required, and install it for you.
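For example, if pip freeze reported django-bouncer==1.2.3 (a made-up version number, purely for illustration), the pinned line in requirements.txt would read:
django-bouncer==1.2.3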
Hope that helps!
It's possible you are running afoul of Heroku's package cache; it sees django-bouncer is already installed and doesn't bother to install it again. But, you can't uninstall it either.
I recall there's a bit of a hack to get around this: Heroku will wipe out its package cache if you change the version of Python you are using. So if you are using, say, 2.7.6, edit your runtime.txt to change it to python-3.4.0. If you are already using a 3.x branch, do the opposite. It's not important that your application actually works on the version you're changing it to -- deploy once, and change it back. That should wipe out your package cache entirely, at which point you'll be good to go.
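A minimal sketch of that dance, assuming you're currently on 2.7.6 (swap the two versions if you're on a 3.x runtime):
echo "python-3.4.0" > runtime.txt
git commit -am "switch runtime to bust Heroku's package cache"
git push heroku master
echo "python-2.7.6" > runtime.txt
git commit -am "restore the real runtime"
git push heroku master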
I'm having problems with an app that uses Django. Everything is in a Docker container, and there is a Pipfile and a Pipfile.lock. So far, so good.
The problem arises when I want to install a new dependency. I open the Docker container's shell and install the dependency with pipenv install <package-name>.
After installing the package, pipenv runs a command to update the Pipfile.lock file, and in doing so updates all packages to their latest versions, bringing with those updates a lot of breaking changes.
I don't understand why this is happening; I have all packages listed in my Pipfile with ~=, which is supposed to avoid updating to versions that can break your app.
I'll give you an example. I have this dependency in my Pipfile: dj-stripe = "~=2.4". But in the Pipfile.lock file, after pipenv runs the lock command, that dependency is updated to its latest version (2.5.1).
What am I doing wrong?
Are you sure you're installing it within Docker? A common cause of Pipfile.lock conflicts is installing a package locally instead of within Docker; when the local environment syncs with Docker, it will override your Pipfile.lock.
Assuming you're using docker-compose, this is how I'm installing my packages:
docker-compose exec web pipenv install <package-name>
I discovered what my problem was.
I'd been listing the dependencies like this: ~=2.4. I thought that was telling pipenv not to update to 2.5 or greater, but that's not true; it only tells pipenv not to update to 3.0 or greater.
In order to stay on the 2.4 series, I must specify the patch version as well, for example: ~=2.4.0
That way, I'm telling pipenv not to update beyond 2.4.x.
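Concretely, the two specifiers behave like this (only one of the lines would actually appear in a Pipfile; the comments paraphrase pip's compatible-release rules):
dj-stripe = "~=2.4"    # means >=2.4, <3.0, so pipenv is free to lock 2.5.1
dj-stripe = "~=2.4.0"  # means >=2.4.0, <2.5.0, so pipenv stays on the 2.4 series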
I am deploying a Django app using Heroku.
When I run
git push heroku master
in my terminal I get the following error:
Could not find a version that satisfies the requirement command-not-found==0.3
When I run
sudo apt-get install command-not-found
I find that command-not-found is version 20.04.2. However, pip freeze tells me command-not-found is version 0.3.
command-not-found doesn't seem to exist on PyPI, but it is a package in Ubuntu and Debian repositories. It doesn't look like anything that your application should depend on, and it certainly doesn't belong on Heroku.
I suspect
you're trying to create your dependencies file after the fact, by simply doing pip freeze > requirements.txt, and
that you're either not working in a virtual environment or you created your virtual environment with system packages.
This is an antipattern that will cause several packages that your application doesn't actually need to be included in your requirements.txt. In this case it is even including Python packages that come from system packages and aren't meant to be installed from PyPI. Your requirements.txt should contain only your actual dependencies.
Instead of creating it with pip freeze after the fact, add things to that file first, and install them into your virtual environment with the same pip install -r requirements.txt command that you'll use in production. I also very strongly urge you to use a virtual environment.
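A minimal sketch of that workflow (the Django pin is just an illustration; list whatever your app actually needs):
python3 -m venv venv
. venv/bin/activate
echo "Django==1.9" >> requirements.txt
pip install -r requirements.txt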
In this case, I suggest you edit your requirements.txt and remove anything you don't actually need, commit, and redeploy.
PyCharm seems to ignore the configured virtualenv and use the base interpreter instead.
In my project at /Users/janos/dev/git/github/bashoneliners I have a virtualenv subdirectory, with only my project's dependencies installed in it:
$ . virtualenv/bin/activate
(virtualenv)janos at kronos in ~/dev/git/github/bashoneliners on master
$ pip -V
pip 1.5.6 from /Users/janos/dev/git/github/bashoneliners/virtualenv/lib/python3.4/site-packages (python 3.4)
(virtualenv)janos at kronos in ~/dev/git/github/bashoneliners on master
$ pip freeze
Django==1.9
Markdown==2.6.5
PyJWT==1.4.0
defusedxml==0.4.1
oauthlib==1.0.3
pep8==1.6.2
pyflakes==1.0.0
python-social-auth==0.2.13
python3-openid==3.0.9
requests==2.9.1
requests-oauthlib==0.6.0
six==1.10.0
tweepy==3.5.0
But if I add this virtualenv as Project Interpreter in PyCharm, it shows completely different packages:
These packages are the same as in my system's base interpreter /opt/local/bin/python. This drives me nuts; I really need to use the packages from the virtualenv, not from my system.
This is with PyCharm Community Edition 5.0.3.
I didn't have this problem before with older versions of PyCharm.
I tried creating a completely new virtualenv, both on the command line and using PyCharm, and invalidating caches and restarting, but nothing seems to work. PyCharm always shows the same list of packages: not the packages of the virtualenv.
Even if I create an empty virtualenv within PyCharm, it doesn't start out empty, but is filled with the same list of packages.
My project works perfectly when I run things on the command line, such as Django management commands and unit tests. I only have problems in PyCharm.
If I try to install packages, for example Django, using PyCharm, I get this error:
Of course permission is denied on /opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages; that's the system interpreter. It shouldn't be trying to install the package there, but in /Users/janos/dev/git/github/bashoneliners/virtualenv. Clearly it's not using pip from the virtualenv, but pip from the system. I need to make it use the one from the virtualenv.
This is logged as a bug in JetBrains' issue tracker, so hopefully it will get sorted out soon: https://youtrack.jetbrains.com/issue/PY-18074
A possible workaround is to fall back to a previous version of PyCharm: https://confluence.jetbrains.com/display/PYH/Previous+PyCharm+Releases
As of January 6, 2016, virtualenv works fine for me in PyCharm 4.5.4.
Some of the virtualenvs previously registered using PyCharm 5.0.3 appear invalid, but that's fine; I deleted all registered interpreters and re-added only the virtualenv I needed.
An odd thing with this older version is that sometimes PyCharm shows the incorrect Python version (2.7 instead of 3.5), but it shows the correct list of modules as per the virtualenv, and the editor doesn't show build errors, so the Python version mixup doesn't seem to cause problems (just a bit scary).
I made custom modifications to one of the Django apps in my requirements.txt. The problem is that after deployment I get errors, because deployment does a fresh pip install from requirements.txt and the changes I made exist only locally. What is the right way to modify pip-installed Django apps locally and have those changes also reflected in the deployment environment?
You could host a fork of the library you want to change somewhere like GitHub, and have your requirements.txt point to that particular change. http://codeinthehole.com/writing/using-pip-and-requirementstxt-to-install-from-the-head-of-a-github-branch/ has a good overview of having a pip requirements file point to a source code repository.
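For example, a requirements.txt line pointing at a fork might look like this (the username, repository, and branch are placeholders for your own fork):
git+https://github.com/yourusername/some-django-app.git@your-branch#egg=some-django-app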
I tried the two methods below to install the django_twilio module on Heroku:
1) Ran 'heroku run pip install django-twilio'
2) Added 'twilio==3.6.3' to requirements.txt and started the server on Heroku.
When I run 'heroku run pip freeze' I can see the twilio entry. But when I go into python and run 'import django_twilio' I get a module not found error.
Please suggest how to fix this on Heroku. The same steps worked fine on my local machine.
You didn't add the proper requirement; you only installed the twilio library. (Running heroku run pip install can't work anyway: heroku run starts a one-off dyno whose filesystem changes are thrown away when it exits, so nothing installed that way reaches your running app.) Your requirements.txt should include the following line:
django-twilio==0.4
Which will include all the other dependencies you'll need. The full pip freeze, after installing django-twilio, looks like this:
Django==1.5.5
django-twilio==0.4
httplib2==0.8
six==1.4.1
twilio==3.6.3
unittest2==0.5.1
As a rule of thumb, always run pip freeze > requirements.txt before pushing an update to Heroku (assuming new dependencies were installed), to make sure you have a complete snapshot of your environment.
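Following that rule of thumb, the fix here is roughly this (the version pin comes from the pip freeze above):
pip install django-twilio==0.4
pip freeze > requirements.txt
git commit -am "Add django-twilio to requirements"
git push heroku master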