Maintaining 3rd Party Django Apps as Git submodules

I know this is perfectly possible and a lot of people are already doing it, but my problem is slightly different and I haven't been able to figure out a solution yet:
Suppose a 3rd party Django app has the structure below:
django-module/
    module/
        __init__.py
        views.py
        models.py
    requirements.txt
    setup.py
I want to bundle only the module directory as a submodule, because then I can access views.py just by typing "module.views". If I added the whole django-module directory as the submodule, I would have to write "django-module.module.views" to reach the module's files, which is not feasible (a hyphenated name is not even a valid Python identifier).
My purpose is to modify the module and occasionally make pull requests to the original repository. Is there a workflow I can follow, or what are the best practices for this purpose?

Pip tips
Pip has support for editable packages and retrieving packages with git, so you could create a virtualenv, use pip to install the packages, and update them using pip when you want.
So you could add:
-e git://git.myproject.org/MyProject.git@da39a3ee5e6b4b0d3255bfef95601890afd80709#egg=MyProject
to your requirements.txt to pin that exact commit.
Suggested workflow
I think that the best way to solve your issue is the following:
Make a private fork of the package
Edit the package in a specific development branch in the forked repository
Use the package from your fork's development branch in your requirements file.
When you feel like it, update the forked package wherever you're using it, via pip.
When you're ready to make a pull request, pull from the original repository, rebase your working branch onto its master, and make the pull request from the master branch of your fork.
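As a concrete sketch (assuming your fork's remote is named origin and the original repository is added as upstream; the branch and remote names are assumptions, so adjust them to your setup):

# one-time setup: point a second remote at the original repository
git remote add upstream git://git.myproject.org/MyProject.git

# when preparing a pull request
git fetch upstream
git checkout development           # your working branch
git rebase upstream/master
git checkout master
git merge development              # fast-forward your fork's master
git push origin master             # then open the pull request from your fork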
This means you have three places where the code is present:
The original repo (where you don't have access)
Your forked repo (where you work on your fork)
Wherever pip installed it (where you use your fork)

The best practice is to keep the submodules exactly as they are.
Once you've added a 3rd party module to your app as a submodule, the next step is to make sure that the django-module directory is on your Python path. As long as it is, you will be able to use the submodule as usual, typing "module.views" as you wish.
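A minimal sketch of that setup (the submodule URL and the assumption that manage.py sits next to the django-module checkout are both illustrative):

git submodule add git://git.myproject.org/django-module.git django-module

Then make the inner package importable, e.g. at the top of manage.py:

import os
import sys

# Put the submodule checkout itself on the path, so that its inner
# "module" package is importable directly instead of as django-module.module.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'django-module'))

import module.views  # now resolves as usual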

Related

Django: how to group apps in directories?

I am using modified sources from a few third-party apps in my project. I would like to put these third-party apps in a separate directory, so that they are not on the same directory level as my own apps. Is this possible in Django?
I tried simply putting the apps in a directory thirdparty and changed my INSTALLED_APPS like so:
INSTALLED_APPS = (
    'my_app',
    ...
    'thirdparty.django_messages',
)
This of course fails with:
ImportError: No module named thirdparty
After which I naturally added __init__.py to the directory. But it fails again:
ImportError: No module named django_messages.apps
Just to avoid any confusion: the app django_messages does contain apps.py.
Is there a way to group django apps in a directory or do they all have to be in the same project root directory?
Edit
A better alternative is in the accepted answer by Antoine Pinsard
For those persistent on grouping apps, see the accepted answer here!
Don't do this. If you really need to modify the source code of third-party apps, fork the repositories so that you will be able to watch and merge upstream updates.
Then install the modified apps with pip.
For instance, if you have forked django-autocomplete-light on your github (let's say https://github.com/dsalaj/django-autocomplete-light):
pip install git+ssh://git@github.com/dsalaj/django-autocomplete-light.git
You will be able to upgrade it like any other pip package:
pip install --upgrade git+ssh://git@github.com/dsalaj/django-autocomplete-light.git
And even add it to your requirements.txt.
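For example, the matching requirements.txt entry could look like this (the #egg name is an assumption; use the name from the package's setup.py):

git+ssh://git@github.com/dsalaj/django-autocomplete-light.git#egg=django-autocomplete-light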
As Mad Wombat mentioned in the comments, you can use pip's --editable (-e) option to install these packages in a specific folder within your project. From pip help:
-e, --editable Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.
Nevertheless, to answer the question: the issue is that the app django_messages assumes it is a top-level module (and it is supposed to be). It therefore imports its own submodules using absolute Python paths (starting with django_messages.). However, when you place it inside a package thirdparty, django_messages becomes a submodule of thirdparty and those absolute imports break. You could add the thirdparty directory to your PYTHONPATH so that django_messages is available as a top-level module again, but it is really not advisable to do so. lib/pythonX.Y/site-packages is the best place for your third-party packages, and this is where pip installs them.
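If you do insist on the grouping, a minimal sketch of that discouraged workaround would be to extend the path at startup, e.g. in manage.py (the paths are assumptions, and thirdparty/ should then contain no __init__.py, so that the same packages are not importable under two names):

import os
import sys

# Make the packages under thirdparty/ importable as top-level modules,
# so "import django_messages" works again.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'thirdparty'))

INSTALLED_APPS would then list 'django_messages' rather than 'thirdparty.django_messages'.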
You may also be interested in python virtualenvs if you don't know what they are.

Using virtualenv with legacy Django projects

I am finally going to start using virtualenv for my Django projects on my development machine. Before I start I want to know if there are any special considerations for dealing with my existing projects. My presumed workflow is something like:
1. make a new virtualenv
2. activate the new virtualenv
3. install Django in there
4. pip install all the packages I know I need for my existing project
5. copy my Django project files, app files, and git files into the project folder within the virtualenv
Edit
6. make requirements file for deployment
This is obviously very simplified, but are there any steps or considerations I am fundamentally missing? Is git going to be happy about the move? Also, is it best practice to have a separate virtualenv for each Django project?
I know this is not a typical code problem, but I hope those that know more than I do can point me in the right direction.
Many thanks.
I don't see any big issue with migrating your projects, and I think your five-step plan is correct; in particular, for steps 3/4/5 (I'd merge them), you can handle project dependencies with pip, possibly using requirements files.
Requirements files are plain text files telling pip which packages to install in your virtualenv, including your git-tracked projects, which can be deployed into your virtual environment as development eggs (they carry their version control info with them).
Once you have a req file, it's a matter of:
pip install -r file.req
to have all needed packages installed in your env.
As you can see from virtualenv docs, a typical req file would contain something like:
django==1.3.0
-e git://git.myproject.org/MyProject.git#egg=MyProject
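If the legacy project already runs in some existing environment, you can bootstrap such a file from it and then prune it by hand:

pip freeze > requirements.txt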
I usually keep each project in its own virtualenv, so I can deploy it to the production server the same way I do for local development.

Cloning a django project from hg using buildout and using it for development

I have a django project sitting in a bitbucket repository. Is it possible to automate the following process using buildout:
1. Install django
2. clone the django project from hg repository
3. install the dependency modules of the django project
Update: I have achieved what I wanted with the help of the mr.developer extension, as suggested by Ross.
While doing that, another question popped up: which is the best place to specify the dependencies, in buildout.cfg or in the setup.py of the 'develop' modules? For now I have duplicated the specification.
Generally, you make your checkout the buildout itself: you place buildout.cfg and bootstrap.py in your project root. That way, when someone checks out/clones your project, they just do the bootstrap/buildout dance and they're up and running.
If you have multiple checkouts, then look into mr.developer.
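A minimal buildout.cfg along those lines might look like the following (a sketch only: the repository URL, project name, and the use of djangorecipe are all assumptions):

[buildout]
extensions = mr.developer
parts = django
auto-checkout = myproject

[sources]
myproject = hg https://bitbucket.org/me/myproject

[django]
recipe = djangorecipe
project = myproject
eggs = myproject

With that in place, the dance mentioned above is just python bootstrap.py followed by bin/buildout.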

Django + SVN + Deployment

I'm a strong proponent of version control, and am starting work on a Django project. I've done a few before, and have tried a few different approaches, but I haven't yet found a decent structure that I actually feel comfortable with.
Here's what I want:
a) Source code checked into version control
b) Preferably the environment is not checked into version control (something like buildout or pip requirements.txt is fine for setting up the environment)
c) A reasonable "get a new developer going" story
d) A reasonable deployment story - preferably the entire deployment environment could be generated by a script on the server
It seems to me like someone has to have done this before, but many hours of searching have all led to half-baked solutions that don't really address all of these.
Any thoughts on where I should look?
Look at fabric to manage deployments.
This is what I use to manage servers/deployments with fabric: louis (it is just a collection of fabric commands). I keep a louisconf.py file with each project.
I'd recommend using a distributed VCS (git, hg,...) instead of svn. The reason being that the ease of branching allows for several schemes for deployment. You can have, for example, production and staging branches. Then you enforce that the only merges into production happen from staging by convention.
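With git, for example, that convention could look like this (the branch names are only illustrative):

git checkout staging
git merge feature/search      # integrate work for testing on staging

# once staging has been verified:
git checkout production
git merge staging             # by convention, the only merge allowed into production
git push origin production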
As for getting developers started quickly you have it right with pip and requirements.txt. I think that also means that you are using virtualenv, but if not that's the third piece. I'd recommend getting a basic README in place. Have the first assignment of each developer that joins a project be to update the README.
The rough way to get someone on board is to have her check out the code, create a virtualenv, and install the requirements.
I'd recommend having a settings.py file that works with sqlite3 and such that a new developer can use to just get going fast (ie after installing the requirements). However, how you manage the different settings files depends on your project layout. There should be some set of default settings for new developers to use, though.
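For example, a development default along those lines might be (a sketch; the PROJECT_ROOT helper and the database file name are assumptions):

import os

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(PROJECT_ROOT, 'dev.sqlite3'),
    }
}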
I keep a projects/ directory in my home directory (on Linux). When I need to start a new project, I make a new dir in projects/ with a short name that sufficiently describes the project; that becomes the root of a new virtualenv (created with --no-site-packages) for that project.
Inside that dir (after I've created the venv, sourced it, and installed the copy of Django I'll be working with), I "django-admin.py startproject" a subdir, normally with the same short name. That dir becomes the root of my hg repo (with a quick hg init and ci), no matter how small the project.
If there's any chance of sharing the project with other developers (a project for work, for example), I include a pip requirements.txt at the repo root. Only project requirements go in there; django-debug-toolbar and django-extensions, staples for my dev workflow, are not project requirements, for example. South, when we use it, is.
As for the django project, I normally keep the default settings.py, possibly with a few changes, and add the local_settings convention to the end of it (try: from local_settings import *; except ImportError: pass). My and other devs' specific environment settings (adding django-extensions and django-debug-toolbar to installed apps, for example) go in local_settings.py, which is not checked in to version control. To help a new dev out, you could provide a template of that file as local_settings.py.temp, or some other name that won't be used for any other purpose, but I find that this unnecessarily clutters the repo.
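Spelled out, that convention is just this at the very end of settings.py:

# Allow per-developer overrides; local_settings.py stays out of version control.
try:
    from local_settings import *
except ImportError:
    pass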
For personal projects, I normally include a README if I plan on releasing it publicly. At work, we maintain Trac environments and good communication to get new devs up to speed on a project.
As for deployment, as rz mentioned, I hear fabric is really good for that kind of automated local/remote scripting, though I haven't had the chance to look into it myself.
For the uninitiated, a typical shell session for this might look like the following:
$ cd ~/projects/
$ mkdir newproj
$ cd newproj/
$ virtualenv --no-site-packages .
$ source bin/activate
(newproj)$ pip install django django-debug-toolbar django-extensions
... installing stuff ...
(newproj)$ django-admin.py startproject newproj
(newproj)$ cd newproj/
(newproj)$ hg init .; hg ci -A -m "Initial code"

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies. I'd therefore have no real advantage in using RPMs; managing dependencies manually is somewhat cumbersome, and I would like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang ("bang") lines in any Python scripts written to the virtualenv's bin directory. These will contain the full paths used in your build environment, which probably won't be the same directory where you end up installing the virtualenv, so you may need to add some sed calls in your RPM's post-install scriptlet to adjust the paths.
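For example, a post-install scriptlet in the spec file could rewrite the interpreter paths like this (both paths are purely illustrative):

%post
# Rewrite build-time shebangs to the actual install location of the venv.
sed -i 's|#!/home/build/rpmbuild/venv/bin/python|#!/opt/myapp/venv/bin/python|' /opt/myapp/venv/bin/*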