Keeping Pipfile Updated - django

I just started a new Django project and use PostgreSQL as my database, so I installed psycopg2 to make it work properly. When I first deployed the project, the app did not work because psycopg2 was not installed on the production server. As I quickly realized, this was because psycopg2 was missing from my Pipfile.
So my question is:
Do I have to update the Pipfile manually every time I install something for my project? I thought the Pipfile would take care of that automatically every time I install something.
Isn't there anything similar to pip freeze > requirements.txt where I can do the update with one short command?

requirements.txt is just a file. There is no logic that updates it automatically (unless, of course, you have an IDE that does that). It is not, per se, the file used by the package manager. You can use any file, and you can use multiple files (for example, one sometimes makes a requirements_test.txt file that contains extra packages that should be installed when testing the software).
You do not need to update the requirements.txt file each time you install a package; as long as the requirements.txt file is correct when you deploy the software (on another machine), it is fine.
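As for the Pipfile itself: pipenv only records a package in the Pipfile when it is installed through pipenv; a plain pip install inside the environment leaves the Pipfile untouched, which is the usual cause of the situation described above. A minimal illustration (the package name is just an example):
pipenv install psycopg2-binary          # recorded in Pipfile and Pipfile.lock
pipenv run pip install psycopg2-binary  # installed, but the Pipfile stays untouched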
You can, however, automate this to some extent. If you use git for version control, for example, you can make a pre-commit hook that runs each time you commit changes, by creating an executable file at .git/hooks/pre-commit in the repository. It might look like:
#!/bin/bash
# Activate the project's virtual environment
. env/bin/activate
# Regenerate requirements.txt and stage it so the update is part of the commit
pip freeze > requirements.txt
git add requirements.txt
Each time you make a commit, the requirements.txt file will thus be "harmonized" with the packages installed in the virtual environment.
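Note that git only runs the hook if the file is executable, so after creating it:
chmod +x .git/hooks/pre-commit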


pipenv updates all dependencies bringing breaking changes

I'm having problems with an app that uses Django. Everything is in a Docker container; there is a Pipfile and a Pipfile.lock. So far, so good.
The problem is when I want to install a new dependency. I open the Docker container's shell, and I install the dependency with pipenv install <package-name>.
After installing the package, pipenv runs a command to update the Pipfile.lock file, and in doing so updates all packages to their latest versions, bringing with these updates a lot of breaking changes.
I don't understand why this is happening. I have all packages listed in my Pipfile with ~=, which is supposed to prevent updates to versions that can break your app.
I'll give you an example. I have this dependency in my Pipfile: dj-stripe = "~=2.4". But in the Pipfile.lock file, after pipenv runs the lock command, that dependency is updated to its latest version (2.5.1).
What am I doing wrong?
Are you sure you're installing it within Docker? A common cause of Pipfile.lock conflicts is installing a package locally instead of within Docker; when the local environment then syncs with Docker, it overrides your Pipfile.lock.
Assuming you're using docker-compose, this is how I'm installing my packages:
docker-compose exec web pipenv install <package-name>
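To confirm that the package actually landed inside the container rather than on your host, you can ask pip from within the container (the service name web follows the example above):
docker-compose exec web pipenv run pip show <package-name>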
I discovered what my problem was.
I've been listing the dependencies like this: ~=2.4. I thought that meant not to update to 2.5 or greater, but that's not true; it only tells pipenv not to update to 3 or greater.
In order to stay on the 2.4 series, I must specify the last version number, for example: ~=2.4.0
That way, I'm telling pipenv not to update away from 2.4.
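This matches PEP 440's compatible-release rules, which drop the final component of the given version when computing the upper bound:
# ~=2.4    is equivalent to  >=2.4, ==2.*      (any 2.x release from 2.4 up)
# ~=2.4.0  is equivalent to  >=2.4.0, ==2.4.*  (patch releases of 2.4 only)
pipenv install "dj-stripe~=2.4.0"        # pins the dependency to the 2.4 series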

How to install a Django project with all modules?

I created a Django application which uses some additional modules, like crispy_forms. I would like to send this application to my friends to test it.
But I don't know how they can just install and run it. Is that possible?
The application also uses a PostgreSQL database.
What is the simplest way to run this application from any place, with no errors or problems at startup?
I only found information at https://docs.djangoproject.com/en/1.10/intro/reusable-apps/ and packaged my app, but I don't know how to install it.
To set up an environment for the project, I would install virtualenv, then:
pip install -r requirements.txt
You need to set the database connection in settings.py, or switch to sqlite3...
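In full, the steps might look like this (a sketch assuming a standard layout with manage.py and requirements.txt at the project root):
python -m venv env               # create a virtual environment
. env/bin/activate               # activate it (Linux/macOS)
pip install -r requirements.txt  # install the project's dependencies
python manage.py migrate         # create the database tables
python manage.py runserver       # start the development server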
hope this helps!
If you are using a virtual environment, activate it and go to your project root.
If you are not using a virtual environment, do the same thing: go to your project root.
Make sure you have a requirements.txt file.
Run the command:
pip freeze > requirements.txt
This will automatically add all your installed packages to the requirements.txt file,
which can then be installed by
pip install -r requirements.txt

Virtualenv - Cleaning up unused package installations

So I have been developing my first Django web application over the past few months, and I have installed a number of packages that I wanted to try using to solve some of my problems. However, some of these packages I installed, tried to use, failed with, and then never uninstalled.
Is there a way to see which packages my application is actually using, from the list given by "pip freeze"?
That way I can uninstall some of the clutter in my application. Is it a huge disadvantage to have this clutter?
In future development I will uninstall packages right away if I do not use them. So lesson learned :).
A method I use is with my requirements.txt files. From the root of my Django project, I create a requirements/ directory with the following files in it:
requirements/
    base.txt
    dev.txt
    prod.txt
    temp.txt
base.txt contains packages to be used in all environments, such as Django==1.8.6.
Then dev.txt includes base.txt plus other packages, and might look like:
-r base.txt
coverage==4.0.2
Then temp.txt includes dev.txt and contains packages I'm not sure I'll use permanently:
-r dev.txt
temp_package==1.0
git+https://github.com/django/django.git#1014ba026e879e56e0f265a8d9f54e6f39843348
Then I can blow away the entire virtualenv and reinstall it from the appropriate requirements file like so:
pip install -r requirements/dev.txt
Or, to include the temp_package I'm testing:
pip install -r requirements/temp.txt
That's just how I do it, and it helps keep my sandbox separate from the finished product.
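Blowing away and recreating the virtualenv might look like this (a sketch; the env directory name is illustrative):
deactivate 2>/dev/null || true   # leave the old environment if it is active
rm -rf env                       # delete it entirely
virtualenv env                   # create a fresh one
. env/bin/activate
pip install -r requirements/dev.txt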
Potentially, you could use isort and run isort myproject/* --diff to get a list of the changes isort proposes for your project's imports; a linter such as pyflakes or flake8 is better suited to flagging imports that are never used. From those results, you could take a look at the packages installed in your virtual environment and start removing them with pip. This assumes you did not already remove the import statements, which may not be the case.
Another way would be to create a new env and attempt to run your app. Use error messages to get the packages you need until your app runs. Not pretty, but it would work.
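Another aid, assuming you can install the third-party pipdeptree package, is to see which installed packages are top-level rather than pulled in as dependencies of something else:
pip install pipdeptree                # helper package from PyPI
pipdeptree                            # show the full dependency tree
pipdeptree --reverse --packages six   # what depends on a given package? (six is just an example)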

How To Preserve Changes To Pip Installed Django Apps After Deployment

I made custom modifications to one of the Django apps in my requirements.txt. The problem is that after deployment I get errors, because I get fresh pip installs from the requirements.txt and the changes I made only exist locally. What is the right way to modify pip-installed Django apps locally and have those changes also reflected in the deployment environment?
You could host a fork of the library you want to change somewhere like GitHub, and have your requirements.txt point to that particular change. http://codeinthehole.com/writing/using-pip-and-requirementstxt-to-install-from-the-head-of-a-github-branch/ has a good overview of having a pip requirements file point to a source code repository.
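Such an entry uses pip's VCS support; the URL, branch, and package name below are illustrative:
# in requirements.txt
git+https://github.com/yourname/forked-app.git@my-fix-branch#egg=forked-app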

How to speed up the Jenkins build process while installing requirements using pip

I am using Jenkins CI for my Django project. For Django-Jenkins integration I am using the django-jenkins app. In the build step of Jenkins, I create a fresh virtualenv and install all the dependencies for each build using the requirements file. However, this makes the build extremely slow, because a fresh copy of all the dependencies must be downloaded from a PyPI mirror even if nothing has changed in the dependencies since the last build. So I started using the local caching built into pip by setting the PIP_DOWNLOAD_CACHE environment variable. But the whole build process is still painfully slow and takes more than 10 minutes. Is there any way I could speed up the whole process? Maybe by caching the compiled dependencies or something else?
Only install a fresh virtualenv when your requirements.txt file changes. This can be done easily with some shell commands. We are doing something similar in one of our projects. In a Jenkins shell window we have (after svn up):
touch changed.txt                    # ensure the marker file exists on the first run
stat -c %Y project/requirements.txt > changed1.txt
# Rebuild the environment only when the recorded timestamp differs from the last build
diff -q changed.txt changed1.txt || echo "DO YOUR PIP --upgrade HERE!"
mv changed1.txt changed.txt          # remember this timestamp for the next build
Why bother creating a fresh virtualenv each time you build? You should be able to create just one and simply activate it with . /path/to/venv/bin/activate as an 'Execute shell script' build step (assuming the use of Linux here). Then, if you need to install a new dependency, you can activate the venv on your own and pip install the new package.
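A build step along these lines could combine both suggestions, reusing a persistent virtualenv and letting pip skip requirements that are already satisfied (the venv path is illustrative):
VENV=/var/lib/jenkins/venvs/myproject   # hypothetical location for the shared venv
[ -d "$VENV" ] || virtualenv "$VENV"    # create it only on the first build
. "$VENV/bin/activate"
pip install -r requirements.txt         # already-satisfied pins are skipped quickly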