Add requirement for only dependent app - django

I am working on a Django project on Ubuntu Linux.
I am not using virtualenv, so when I run the command
pip freeze > requirement.txt
it adds hundreds of lines (packages) to my requirement.txt file. I want to add only the packages that this app actually needs to run.
Is there any way to do it?

There's no automatic way to get only the packages you need; you'll have to construct the requirements file manually. It's not that hard to do, though: start by looking at all the imports in all your files and add the packages for those imports. Then run your app in a new virtualenv with only those packages installed; any time it crashes because of a missing import, you know that you need to add another one!
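As a starting point for that manual process, the top-level imports can be collected mechanically. The sketch below uses the standard-library ast module to pull top-level imported names out of a source string; mapping those names to PyPI package names is still a manual step, since the two don't always match (e.g. the package Pillow is imported as PIL).

```python
import ast

def top_level_imports(source):
    """Collect the top-level module names imported by a Python source string."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # skip relative imports (from . import x); those are your own code
            names.add(node.module.split(".")[0])
    return names

# To scan a whole project, run it over every .py file, e.g.:
# for path in pathlib.Path(".").rglob("*.py"):
#     print(path, top_level_imports(path.read_text()))
```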

Get only the top-level packages pip installed
This omits packages that are pulled in as dependencies of others, and can be used to get a clean list of Python modules to add to the requirement.txt file:
comm -12 <(pip list --format=freeze --not-required) <(pip freeze) > requirements.txt
Hope this helps!
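For reference, comm -12 prints only the lines common to its two sorted inputs, so the command above is simply a set intersection of the two pip listings. The same logic, sketched in Python with hypothetical input lists:

```python
def top_level_requirements(freeze_output, not_required_output):
    """Intersect `pip freeze` with `pip list --format=freeze --not-required`,
    keeping only packages that no other installed package depends on."""
    return sorted(set(freeze_output) & set(not_required_output))

# Example with made-up listings: pytz is a dependency of Django, so it drops out.
print(top_level_requirements(["Django==2.0", "pytz==2018.3"], ["Django==2.0"]))
```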
I am not sure whether we should put everything we receive from pip freeze, or only the required packages, in the requirement.txt file.
I have asked a question about it here.


using browser-sync with django on docker-compose

I'm doing a project for college and I've never used Docker before. I usually use browser-sync when working on static files, but now that I'm using Django on docker-compose (I followed THIS tutorial), I have no idea how to set it up to work. Can anybody give me advice or point me in the right direction?
So, I found a solution.
Start by following the tutorial here to set up Django with docker-compose; by the end of it you should have a working base Django project. Then follow the steps below.
How to use "livereload" with "docker-compose":
On your host machine, open the command line or the terminal and run:
pip install --upgrade pip
pip install django-livereload-server
pip install psycopg2-binary
PS: I'm using psycopg2 in docker-compose, which is why I'm installing it;
if you're using something else, install that instead of psycopg2.
Now add this line to the requirements.txt file (from the tutorial):
django-livereload-server
The file should look like this (if you followed the tutorial step by step; you can change it according to which database you want to use):
Django==2.0
psycopg2-binary
django-livereload-server
Open the terminal, cd to your project's directory, and run:
docker-compose build
to install the new django-livereload-server into your Docker environment.
Now that you have everything installed,
you need to set up your project to use the django-livereload-server module.
In your project's settings.py,
add livereload to INSTALLED_APPS:
INSTALLED_APPS = [
    ...
    'livereload',
    ...
]
and add the livereload middleware to MIDDLEWARE:
MIDDLEWARE = [
    ...
    'livereload.middleware.LiveReloadScript',
]
and make sure that DEBUG is set to True.
Now you can start developing.
Open two consoles (terminals) in your project's directory.
In the first one, run:
python manage.py livereload
Wait until the server starts; when it's working, leave it running, and in the second terminal run:
docker-compose up
The server in the second terminal runs the Django development server, and the server in the first terminal feeds it a livereload.js file, which the django-livereload-server module uses to inject CSS and to automatically reload HTML and JS on save, etc.
PS: make sure the first server (livereload) is running before you launch the second one.
I hope this helped!

Unable to run setup.py behind proxy

I'm new to Python (and Linux) and I'm trying to run setup.py; however, it's not working properly because a corporate proxy is blocking the requests to PyPI.
I checked this link on how to use setup.py properly, and also checked this and this solutions on Stack Overflow, but I can't make them work (or I'm applying them the wrong way).
I'm using:
virtualenv
virtualenvwrapper
python 2.7
Ubuntu 14
I have already added http_proxy and https_proxy in .profile and .bashrc.
When I use pip install --proxy the.proxy:port some_module it works properly (I also know the env variables do something, because before setting them I couldn't even get to stackoverflow.com, so I'm assuming they work just fine).
What I have already tried:
Using --proxy on python
Looking for something similar to --proxy in python
Adding the proxy configuration described in one of the solutions mentioned earlier to my setup.py (which is added to the description of this problem)
Successfully downloading a couple of modules with pip --proxy (this is my current not-so-good solution)
Messing with the Python configuration files in the virtualenv in the hope of finding some proxy config
My setup.py file looks like this:
from setuptools import setup, find_packages
import requests

with open('development.txt') as file:
    install_requires = file.readlines()

with open('development_test.txt') as file_test:
    test_requires = file_test.readlines()

setup(
    name="my_project",
    version="1.0.0-SNAPSHOT",
    packages=find_packages(),
    install_requires=install_requires,
    test_suite="nose.collector",
    tests_require=test_requires,
)

proxies = {
    "http": "http://proxy.myproxy.com:3333",
    "https": "http://proxy.myproxy.com:3333",
}

# not sure what goes here... tried a few things but nothing happened
requests.get("https://pypi.python.org", proxies=proxies)
I'll try any suggestion, any help appreciated.
Thanks
After a deep search into how Python works, and not being able to find the problem, I started looking at how the bash commands work.
It turns out you have to export the http_proxy variables and run the command with sudo -E so they are preserved.
A rookie mistake.
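For context: Python's urllib (which pip and setuptools use under the hood) reads proxy settings straight from those environment variables, which is why a plain sudo (which resets the environment by default) silently drops them, while sudo -E preserves them. A minimal check, with a placeholder proxy URL:

```python
import os
import urllib.request

# Placeholder proxy; substitute your corporate proxy URL.
os.environ["http_proxy"] = "http://proxy.myproxy.com:3333"
os.environ["https_proxy"] = "http://proxy.myproxy.com:3333"

# urllib (and therefore pip/setuptools) resolves proxies from the environment:
proxies = urllib.request.getproxies()
print(proxies.get("http"), proxies.get("https"))
```

If this prints None under sudo but not in your normal shell, the environment variables are being stripped.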

django-admin command error while project creation

After upgrading to Django 1.9 and trying to create a new project, I got the following error.
How should I solve this?
After upgrading to Django 1.9 and creating a new project, the following error occurred:
CommandError: /home/shaastr/ehgg/manage.py already exists, overlaying a project or app into an existing directory won't replace conflicting files
I think you have two versions of Django installed, and both are being called when trying to start the project.
Try running pip uninstall django twice; if it runs both times, then this was what was going on. Obviously, run pip install django afterwards to get it working again.
I had the same problem after using pip to install Django 1.10 over an older version.
I used pip to uninstall it and manually deleted the leftover django folder in the site-packages folder,
then re-installed it using pip, and now it is working with no problem.
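When you suspect leftovers like this, it can help to check which copy the interpreter actually resolves. A small sketch using only the standard library:

```python
import importlib.util

def module_location(name):
    """Return the filesystem path a module would be imported from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If this still prints a path after `pip uninstall django`, a stale copy is
# lingering in site-packages and needs to be deleted by hand.
print(module_location("django"))
```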
I am also working with Docker containers. I had this problem where it said that manage.py already exists in the work directory (which I created through the Dockerfile) when I tried to restart the process of making a container after deleting the old one.
It did not show me where the work directory was, and hence I could not delete the manage.py, as pointed out in the error.
The solution that worked was to change the service name in my yml file and run the command with the new service name:
docker-compose run servicenm django-admin.py startproject projectnm directory
Remove manage.py, then re-run your django-admin startproject command; it will work.
Make sure that if you have deleted (rm -r) your Django project directory, you also delete (rm) the corresponding manage.py file of the deleted project in the same directory.
sudo pip uninstall django
sudo rm /usr/local/lib/python2.7/dist-packages/django/ -rf
sudo pip install django==1.10
This resolved my problem.
You need to choose another directory for your new project, not the /ehgg directory.
It seems you are creating a new project inside your old project,
and this error clearly states that there is an old manage.py from your old project, since a new manage.py is created for every new project.
I hope it's clear to you.
Thank you.
Check whether the project name is correct or not. Django does not allow
hyphens (-) in project names.
It can happen for two reasons:
You are trying to create a new folder with an existing folder's name.
You previously deleted a folder with this name for some reason, but are again trying to create a package with this name.
To resolve this, follow these steps:
Rename the manage.py in your project folder.
Go to <%System Path%>/PycharmProjects/<%Your Project Name%>/.idea/workspace.xml.
Edit workspace.xml and search for the package name you are trying to create.
Delete that line and save the file.
Now try to run the command again.
I hope this helps.
Regards,

How can I install external Pinax projects?

I am trying to deal with the poor documentation of Pinax.
I found this project:
https://github.com/pinax/pinax-multiblog-project
What I want is to install it. I normally install a project called account, but here I have to install from git. How can I do that in Pinax?
Edit:
It turns out the new way of using projects is to just copy the folder and rename it.
Usage instructions:
To use the multiblog project, you would:
Copy the multiblog folder inside the cloned repo to a new location and rename it to the name you would like to use for your project. Then install the requirements via pip and follow the rest of the steps for setting up a Django project.
OSX/Linux:
cd ~/src
git clone https://github.com/pinax/pinax-multiblog-project
cp -r ~/src/pinax-multiblog-project/multiblog ~/Sites/new_project
cd ~/Sites/new_project
pip install -r requirements.txt
python manage.py syncdb
I too have been trying to accomplish the same thing. So far I found this commit:
https://github.com/nrb/pinax/blob/476d2398c48cc444eb2338c12090f0cebad46961/docs/starterprojects.txt
The relevant section begins on line 160 of that file, near the end.
External Starter Projects
=========================
The Pinax ``setup_project`` command can also use starter projects built by third parties.
These can either be plain directory structures, or they may be a git/hg pip editable.
To install a starter project from an external source, simply pass the file path or git/hg
URL to the ``-b`` option::
pinax-admin setup_project -b git+git://github.com/user/project.git#egg=project my_new_project
However, none of the Pinax projects I have come across seem to include an egg to use,
e.g.:
https://github.com/pinax/pinax-multiblog-project
https://github.com/pinax/pinax-project-account
I need to use Django 1.4 for my project but the included account base project in both Pinax 0.9a2 and 0.9b1-dev10 use Django 1.3.
I am guessing the external project integration is something we will have to wait for in the Pinax 1.0 release.

django + virtualenv = atomic upgrade - is it possible?

I have a Django project running within a virtualenv with no site-packages. When it comes to pushing my new changes to the server, I would like to create a new virtualenv directory, install my project and all its dependencies, then do a quick renaming of the two virtualenv directories ONLY if the new code deployed successfully.
All is great on paper, until the point where you rename the virtualenv directory. The relocate option of virtualenv is not reliable, as per its own documentation.
How do you suggest upgrading my project ONLY if the new code is proven to be deployable?
Here are the steps:
# fab update_server runs the following:
cd /srv/myenv  # existing instance
cd ../
virtualenv myenv-1
source myenv-1/bin/activate
git clone http://my.com/project
pip install -r project/req.txt
# all worked
mv myenv myenv-2; mv myenv-1 myenv
touch /path/proj.wsgi  # have apache reload us
The above is perfect on paper, but renaming or relocating a virtualenv is not reliable.
Upgrading the live site in place within myenv takes time and may break the site too.
How would you do it?
Buildout?
I do it with symlinks and completely separate release directories. That is, a deployment involves cloning the entire project into a new directory, building the virtualenv inside that, then switching the "production" symlink to point at that directory.
My layout is basically:
/var/www/myapp/
    uploads/
    tmp/
    releases/
        001/myapp/
        002/myapp/
        003/myapp/
            ve/
            ...etc in each release directory...
    myapp  # symlink to releases/003/myapp/
So, when I deploy to production, my deployment scripts rsync a completely fresh copy to /var/www/myapp/releases/004/myapp/, build a virtualenv in there, install all the packages into it, then
rm -f /var/www/myapp/myapp
ln -s /var/www/myapp/releases/004/myapp/ /var/www/myapp/myapp
My actual deployment script is a little more complicated as I also make sure to keep the previous release around and marked so if I notice that something is really broken, rolling back is just a matter of switching the symlink to point back at the previous one. (some extra work is also necessary to clean up old, unused releases if you are worried about the disk space).
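One note on the rm -f / ln -s pair above: between the two commands there is a brief window in which no symlink exists at all. Creating a temporary link and renaming it over the old one closes that window, since rename(2) is atomic on POSIX filesystems. A sketch in Python (the paths are placeholders matching the layout above):

```python
import os

def switch_release(link_path, new_target):
    """Atomically repoint link_path at new_target via a temporary symlink."""
    tmp = link_path + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)          # clear any leftover temp link from a failed run
    os.symlink(new_target, tmp) # build the new link beside the live one
    os.replace(tmp, link_path)  # rename(2): atomic on POSIX filesystems

# e.g. switch_release("/var/www/myapp/myapp", "/var/www/myapp/releases/004/myapp")
```

The same trick works from the shell with ln -sfn to a temp name followed by mv -T.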
Every external reference (in apache config files or wsgi files or whatever) points at libraries and binaries in the virtualenv as /var/www/myapp/myapp/ve/. I also shun the use of source ve/bin/activate and instead point at the full path in config files, and I edit manage.py to use #!ve/bin/python so I can run commands with ./manage.py whatever and it will always work without me having to remember whether I've activated the right virtualenv.