So I have been developing my first Django web application over the past few months, and along the way I installed a number of packages to try out against various problems. However, some of those packages I installed, tried to use, gave up on, and then never uninstalled.
Is there a way to see which of the packages listed by "pip freeze" my application is actually using?
That way I can uninstall some of the clutter in my application. Is it a huge disadvantage to have this clutter?
In future development I will uninstall packages right away if I do not use them. So lesson learned :).
A method I use is splitting my requirements into several files. From the root of my Django project, I create a requirements/ directory with the following files in it:
requirements/
base.txt
dev.txt
prod.txt
temp.txt
base.txt contains packages to be used in all environments, such as Django==1.8.6.
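As a sketch, base.txt might hold nothing but the pinned core dependencies (the second line is just an illustrative example, not from my actual project):
Django==1.8.6
psycopg2==2.6.1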
Then dev.txt would include base.txt plus development-only packages, and might look like:
-r base.txt
coverage==4.0.2
Then temp.txt includes dev.txt and contains packages I'm not sure I'll use permanently:
-r dev.txt
temp_package==1.0
git+https://github.com/django/django.git#1014ba026e879e56e0f265a8d9f54e6f39843348
Then I can blow away the entire virtualenv and reinstall it from the appropriate requirements file like so:
pip install -r requirements/dev.txt
Or, to include the temp_package I'm testing:
pip install -r requirements/temp.txt
That's just how I do it, and it helps keep my sandbox separate from the finished product.
Potentially, you could run isort myproject/* --diff to see the changes isort proposes, but note that isort only sorts and groups imports; it does not flag unused ones. For that you want a linter such as pyflakes (or flake8, which wraps it), which reports imports that are never used. From that report you can work out which installed packages your code never imports, and start removing them from your virtual environment with pip. This assumes you did not already delete the import statements, which may not be the case.
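A minimal sketch of that check, assuming flake8 is installed in the same environment (F401 is its "imported but unused" code):
pip install flake8
flake8 --select=F401 myproject/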
Another way would be to create a fresh virtualenv and attempt to run your app, using the import error messages to discover which packages you need, until the app runs. Not pretty, but it would work.
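Roughly, that loop could look like the following; the environment name and the runserver entry point are assumptions about your setup:
virtualenv fresh_env
. fresh_env/bin/activate
python manage.py runserver   # fails with an ImportError for each missing package
pip install django           # install whatever the error names, then rerun until it starts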
I have a number of packages in the requirements.txt file. I need to list which packages are used in the Django project and which are not, from the command prompt. Is that possible?
From your question it's not entirely clear whether this is what you're looking for, but from what I've gathered, I think you may find just what you want in this thread: "How to see unused dependencies in requirements.txt?"
If you use a virtual environment (venv, pipenv, etc.), activate it and run this at the command prompt:
pip freeze > requirements.txt
Note that this writes out every package installed in the environment, not just the ones your project actually uses.
I just started a new Django project and use PostgreSQL as my database, so I installed psycopg2 to make it work properly. When I first deployed the project, the app did not work because psycopg2 was not installed on the production server. As I quickly realized, this was because psycopg2 was missing from my Pipfile.
So my question is:
Do I have to update the Pipfile manually every time I install something for my project? I thought the Pipfile would take care of that automatically every time I install something.
Isn't there anything similar to pip freeze > requirements.txt where I can do the update with one short command?
requirements.txt is just a file. There is no logic around it that updates it automatically (unless your IDE does that for you). It is not, per se, the file the package manager uses: you can use any file name, and you can use multiple files (for example, a requirements_test.txt that lists extra packages to install when testing the software).
You do not strictly need to update the requirements.txt file each time you install software; as long as the file is correct when you deploy the software on another machine, it is fine.
You can, however, automate this to some extent. If you use git for version control, you can add a pre-commit hook that runs each time you commit changes, by creating an executable file at .git/hooks/pre-commit in the repository. It might look something like:
#!/bin/bash
# Regenerate requirements.txt from the project's virtualenv before each commit.
. env/bin/activate
pip freeze > requirements.txt
git add requirements.txt   # stage the refreshed file so it lands in this commit
Each time you make a commit, the requirements.txt will thus be "harmonized" with the packages installed in the virtual environment.
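One detail worth noting: git only runs hooks that are marked executable, so after creating the file you would do (path as above):
chmod +x .git/hooks/pre-commit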
This may be a silly question, but I would still like to ask:
I am developing a project using Django, CherryPy, and Nginx. I noticed that a requirements.txt file is usually created to record the packages and versions installed in the development environment. Suppose the directory of the virtual environment is /home/me/project/python2Venv.
When I deploy my Django project (tango) into production, the project is copied to the production directory:
sudo cp -r /home/me/project/tango /webapps/tango
For the virtual environment, may I just copy the whole directory using the following command, or should I install each of the packages into the production environment again according to requirements.txt?
sudo cp -r /home/me/project/python2Venv /webapps/tango/python2Venv
I think virtualenv hardcodes absolute paths in some of its files (the activate script, for instance), so recreating the environment and installing the packages via requirements.txt would be safer.
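As a sketch, recreating the environment on the production machine might look like this, assuming requirements.txt was copied over along with the project:
cd /webapps/tango
virtualenv python2Venv
python2Venv/bin/pip install -r requirements.txt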
In my opinion, installing the packages from requirements.txt is the way to go; copying the directory can end up being a nightmare.
Say in update 1 you have 4 packages, each pinned to a specific version (pkg1-ver1, pkg2-ver1, pkg3-ver1, pkg4-ver1). In update 2 you upgrade one package to its new version (pkg1-ver2). With requirements.txt you would upgrade just that one package, instead of copying all the packages again (and I am not sure how well copying the whole directory would even work).
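Concretely, the upgrade is then a single command; pkg1 is the placeholder name from above:
# after bumping the pin in requirements.txt from pkg1's old version to its new one
pip install -r requirements.txt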
Hope this helps!
You should install the packages from the requirements.txt file.
Or you can use virtualenvwrapper. It makes managing local virtual environments easy, with commands such as cpvirtualenv (copy an environment) and rmvirtualenv (remove one).
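For example, with virtualenvwrapper installed (the environment names here are made up):
mkvirtualenv tango          # create a new environment
workon tango                # activate it
cpvirtualenv tango tango2   # duplicate it
rmvirtualenv tango2         # delete the copy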
So I was installing an app in order to bootstrap my Django admin interface, and I thought the app would be project-specific, but it appears to have been installed at a global level instead.
http://riccardo.forina.me/bootstrap-your-django-admin-in-3-minutes/
https://github.com/riccardo-forina/django-admin-bootstrapped
My question is: how can I uninstall it at a later date if I need to? I wanted my project to be as independent as possible. I was also wondering if there is a way of doing the installation within the project, so that people who download my repository automatically get it.
A minor follow-up: after adding django_admin_bootstrapped to the INSTALLED_APPS list before django.contrib.admin, I was not required to run a syncdb command as we usually are when installing apps with models. I guess this application doesn't create any tables in my database, so that is probably why, but I just wanted to know your thoughts.
I know it is a lot to answer but any clarification is appreciated. Thanks.
If you have installed the Django app using pip (here, app_name would be django-admin-bootstrapped), do:
pip uninstall app_name
Or you have to go manually to your site-packages directory and remove it.
After that, remove the app from INSTALLED_APPS. django-admin-bootstrapped might have overridden your templates, so once you are done, run ./manage.py collectstatic and ./manage.py syncdb.
If you're writing something that you want other people to use, and it relies on other packages (whether Django apps or more generic Python packages), it's standard to use pip. This makes it easy to install and uninstall packages, and specific versions of those packages. You can then create a requirements.txt file to include with your project. This lets other people know what packages are required, and they can easily install them using pip.
So, first off, install pip.
Then you would install django-admin-bootstrapped by doing:
$ pip install django-admin-bootstrapped
You can also install django using pip:
$ pip install django
If you then do this:
$ pip freeze > requirements.txt
you'll end up with a requirements.txt file that lists any packages you've installed with pip, and which version of each. Include that file with your project when you share it with others (on GitHub or wherever). Those people would then do this, to install the same packages:
$ pip install -r requirements.txt
It would also be worth installing and using virtualenv – this lets you have separate environments for your python work, so that when you pip install something it's only available in that environment. Then you can have different versions of packages (eg, different versions of Django) in each environment. Virtualenvwrapper also makes some of the common virtualenv tasks a little easier.
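A quick sketch of that virtualenv workflow (the environment name is arbitrary):
$ virtualenv env                     # create an isolated environment
$ . env/bin/activate                 # activate it in this shell
$ pip install -r requirements.txt    # install the project's pinned packages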
It's a lot to get to grips with at first, as I know from experience, but it's worth doing so and will make development easier in the long term.
As to your other question, it looks like django-admin-bootstrapped doesn't have any models, so it doesn't require any updating of the database after installation.
I am using Jenkins CI for my django project. For Django-Jenkins integration I am using the django-jenkins app. In the build step of Jenkins I create a fresh virtualenv and install all the dependencies for each build using requirements file. However, this makes build extremely slow because a fresh copy of all the dependencies must be downloaded from a PyPI mirror, even if nothing has changed in the dependencies since the last build. So I started using the local caching built-in to pip by setting the PIP_DOWNLOAD_CACHE environment variable. But the whole build process is still painfully slow and takes more than 10 minutes. Is there any way I could speed up the whole process? Maybe by caching the compiled dependencies or something else?
Only reinstall a fresh virtualenv when your requirements.txt file actually changes. This can be done easily with a few shell commands; we do something similar in one of our projects. In a Jenkins shell step we have (after svn up):
touch changed.txt                                    # make sure the marker file exists on the first run
stat -c %Y project/requirements.txt > changed1.txt   # record the current mtime of requirements.txt
diff -q changed.txt changed1.txt || echo "DO YOUR PIP --upgrade HERE!"
mv changed1.txt changed.txt                          # remember this mtime for the next build
Why bother creating a fresh virtualenv each time you build? You should be able to create just one and simply activate it with . /path/to/venv/bin/activate as an 'Execute shell script' build step (assuming the use of linux here). Then, if you need to install a new dependency, you can activate the venv on your own and pip install the new package.
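As a sketch, the whole build step might then be just this (paths are illustrative; manage.py jenkins is the django-jenkins command mentioned in the question):
. /path/to/venv/bin/activate
pip install -r requirements.txt   # near-instant no-op when nothing has changed
python manage.py jenkins          # run the tests and reports via django-jenkins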