PyDev + Django workflow: local (test) + remote synchronization, and using git with Django

I'm new to Django and my very first project is my blog. I wonder how Django developers who use PyDev normally synchronize with their remote hosting server when updating their sites.
I'd also like to know how you combine git with a Django project. Should I just make a single repository for the entire project?

At my company we have an entire git repository for each project, including the Django sources, which are put on the PYTHONPATH for each project, making the Django version project-dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not itself a Python package, we include both / and /django-lib in the PYTHONPATH. If your project grows large, you might want to consider using git submodules for your apps.
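For illustration only (this is a minimal sketch of the idea, not our actual file), manage.py can prepend those two folders itself, given the layout above:

import os
import sys

# the repository root is the parent of /projectname, where manage.py lives
REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# put /django-lib first so 'import django' resolves to the vendored copy
sys.path.insert(0, os.path.join(REPO_ROOT, 'django-lib'))
sys.path.insert(0, REPO_ROOT)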
We've also set up several servers to support the developers. There's a testing server running a central testing database, together with an Apache + WSGI setup, to make testing on a real server possible, which sometimes behaves a bit differently than the local manage.py runserver the developers use before committing their changes.
The testing server is updated from the master branch of our git repository. We've written several scripts that let all developers do this without logging in to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all the scripts from it to make it just like our production server.
Every developer has set up their local project to communicate with the central testing database, containing test data. I myself push my changes from the command line, but you could also use EGit for this.
When we have a release, we put it in a separate branch called 'release' (obviously), and the production server pulls only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope this has helped you a bit. I won't say this is the best workflow possible, but it works for us, and you should figure out what works for you.

Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a set of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip freeze > ./docs/requirements/main.list
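On a fresh machine you would reverse the process; something like this, with the environment path mirroring the one above:

virtualenv ~/Dev/environs/$PROJECT_NAME
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip install -r ./docs/requirements/main.list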
I'm fairly sure most Django developers are familiar with Fabric, which I use for the following (a fabfile sketch follows this list):
streamlining local interaction with git,
pushing to our central repository,
pulling from our production or test server,
touching the wsgi file on the relevant server,
and pretty much any other kind of task you might otherwise use an SSH terminal session for.
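As a rough illustration only (the task names, paths and hosts below are made up, not taken from this answer), such a fabfile might look like:

from fabric.api import cd, local, run

def push():
    # streamline the local git interaction: push to the central repository
    local('git push origin master')

def deploy():
    # pull the latest code on the server and nudge the WSGI app to reload
    with cd('/var/www/myproject'):      # assumed project path on the server
        run('git pull origin master')
        run('touch myproject/wsgi.py')  # mod_wsgi/uWSGI reloads on touch

You would then run it as, e.g., fab -H user@testserver deploy.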
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on github,
clone from my forked repo
make the changes
push it up to my own repo
and send pull requests to the original repo owner
This way, I have a repo that my pip requirements lists can keep pulling from until the original application owner gets their own repo updated.
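Such a requirements list can point straight at the fork; a hypothetical entry (the URL and egg name are made up) looks like:

-e git+https://github.com/yourname/someapp.git@master#egg=someapp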

Related

How to visit a git branch of Django project on Nginx/uWSGI server?

I have successfully built several web sites hosted on an Nginx server using Django, uWSGI and virtualenv.
I had never used version control but now I am starting to use Git.
I understand how to create, commit and push branches.
My question is: how to make different Git branches visible at the web address of the site I'm working on?
Do I change the Nginx config file to point somewhere different?
I just updated the dev branch of my project, and of course the site does not reflect the changes.
How can I tell the server to serve the dev branch or the master branch of the project?
I would prefer to avoid a complicated system with different subdomains for different branches; I really just want the simplest thing that will work.
[update] I have found lots of pages that explain complex ways to set up staging servers etc., but I really just want to understand what is going on... there's a giant conceptual hole in my understanding about how the server interacts with a local Git project.
Right now, the Nginx config and the uWSGI config point to a folder like:
/var/www/sitefiles
That is the Django folder (inside it is sitefiles/settings.py etc.).
It is in that folder that I did git init, some commits, branching & pushes.
Does using Git mean that the Nginx and uWSGI configs should point elsewhere?
It's pretty simple: go to the path of the project where the git working copy lives, check out the required branch, and touch the WSGI file. Nginx and uWSGI keep pointing at the same folder; checking out a branch changes the files in that folder, and touching wsgi.py prompts uWSGI to reload the application:
git checkout dev
touch project/wsgi.py
or, to roll back to the master branch:
git checkout master
touch project/wsgi.py
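One caveat: uWSGI only reloads on the touch if it has been told to watch that file. If yours has not, the touch-reload option in the uWSGI ini is one common way to do it (the path is an assumption based on the question's layout):

# uwsgi.ini
touch-reload = /var/www/sitefiles/sitefiles/wsgi.py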

Git - How to commit a local repository to a subfolder of another local repository?

I have a Django project that I started some time ago, and I was hosting it at Bitbucket. Now I need to host it at OpenShift, and the way that works is that they provide you with a git repository, and every time you push they deploy automatically. The problem is that the repository comes with several top-level folders for configuration and setup, and the actual Django project must live inside a subfolder called wsgi/openshift.
My question is: how can I commit my changes from my local Django repository to the wsgi/openshift subfolder of my local OpenShift repository? I intend to continue developing against the Bitbucket/local repository.
You are probably looking for submodules. From the docs:
Submodules allow foreign repositories to be embedded within a
dedicated subdirectory of the source tree, always pointed at a
particular commit.
So you would have the bitbucket repository embedded as a separate repository in a subfolder of the openshift repository by running
git submodule add path_to_bitbucket folder/in/openshift
in the openshift repository.
You will have to run an occasional git submodule update to keep openshift up to date, but you probably already expected extra work of that sort.
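The update itself is only a few commands; a sketch, reusing the folder from the example above:

(cd folder/in/openshift && git pull origin master)  # bring the embedded repo up to date
git add folder/in/openshift                         # record the new submodule commit
git commit -m "Update embedded Django project"
git push                                            # the push triggers the openshift deploy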
I had the exact same problem! It is highly annoying, but I took another road:
Why don't you create a Python 2.7 project from scratch? The current Django structure is honestly annoying. The way I did it was:
1. Create an OpenShift project, which comes in that annoying structure.
2. Copy and preserve (in my local FS, not in OpenShift) a copy of settings.py and wsgi.py.
3. Discard that project, and start a bare Python 2.7 project.
4. Check it out to my local FS, and create a Django project in my local FS.
5. Replace the contents of wsgi.py and settings.py accordingly (adapting any misplaced paths; it's easier than it looks).
6. Commit/push (this new structure).
In your case, you will do point 4 differently: check out that remote branch (bitbucket) as well, merge it into the openshift branch, change those files accordingly as in point 5, and push the openshift branch.
There you have a brand-new project matching your structure (perhaps you want to configure both remotes in your environment: openshift and bitbucket).
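Setting up both remotes looks roughly like this (both URLs are placeholders, not real ones):

git remote add bitbucket git@bitbucket.org:you/yourproject.git
git remote add openshift ssh://appid@yourapp-yourdomain.rhcloud.com/~/git/yourapp.git
git push bitbucket master   # normal development pushes
git push openshift master   # this push triggers the OpenShift deploy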
That's the way I did it, and honestly I have nothing to regret.
Off-topic, but perhaps useful since you're using Django: this is especially important if you also want to use (gunicorn|uwsgi)+nginx (with a custom cartridge, which provides nginx and Python rather than Apache), and so cannot use the default Django cartridge.

Moving from runserver to a production server

I am quite new to programming, and all of my development has been on my local runserver using TextMate and Terminal. I have written a small app of a few hundred lines, and I'd like to push it to an EC2 server. My only knowledge in terms of 'developing tools' is Django's local runserver, TextMate and Terminal.
What tools or programs should I look into learning to have an effective workflow? Should I be using some sort of IDE over TextMate for this? My main concern is being able to develop on my local runserver and then painlessly push that to my production server.
As @isbadawi said, use Fabric. It's better than just using the terminal because you can automate things. As far as deployments go, you can simplify it down to: fab -H your.host.com deploy. Inside the fabfile you write commands; a simple deploy might go like this (a rough server-side equivalent is sketched after the list):
Cause the server to download the most recent code from SCM
Update the database (syncdb / migrations / what have you)
Cause apache or whatever you're using to reload the configuration
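In server-side terms, that task boils down to something like the following (the commands and paths are illustrative; adapt them to your stack):

cd /var/www/myapp                  # assumed project location on the server
git pull origin master             # 1. download the most recent code from SCM
python manage.py syncdb --noinput  # 2. update the database (or run your migrations)
sudo service apache2 reload        # 3. have apache reload its configuration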
As far as some more general tips go:
If you're using WSGI, put the WSGI file under source control
The same goes for local settings files; have your deploy script rename the appropriate one to local_settings.py as part of the build
If you're really going for painless, look into Django hosting services like Gondor or Ep.io. They have clients you can deploy with pseudo-painlessly, although you will have to change some settings on your side to match theirs, as there are many, many ways to deploy a Django app.
Update: Ep.io is no longer in business as a hosting service. My new go-to is Heroku.
Update 2: I used to link local_settings.py in deployments, but now I'm leaning towards using the DJANGO_SETTINGS_MODULE config variable. See rdegges' "django-skel" settings for a good way to do this.
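On Heroku, for example, that amounts to one command (the settings path here is an assumption):

heroku config:set DJANGO_SETTINGS_MODULE=myproject.settings.prod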
A DVCS such as git or Mercurial will allow you to develop and test locally, and then push the changes to a remote system for staging and production.

Django testing environment

I have deployed several Django-driven sites, mostly "concept" stuff; nothing serious. Now I'm ready to deploy a real-deal site (for my brother's medical practice), and would like to ensure that I'm doing it correctly.
My central concern is the testing environment. I had been maintaining two separate folders with different Mercurial copies of the site, updating the development branch, merging it with the release branch, and then uploading to the server (WebFaction).
How do you manage testing environment for your Django projects?
All development is done on my local machine. I use virtualenv (and virtualenvwrapper) for the multiple projects. With virtualenv, you can have several versions of the same software without 'breaking' other code that may depend on a certain version. I use pip to download the proper libraries/applications into these separate environments. For each project (and therefore environment), I have a Mercurial repository. If the new development passes all unit tests and works as expected, I push it to the VCS. Once in the VCS, the code gets reviewed by colleagues.
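With virtualenvwrapper, the per-project setup is just a few commands (the project and file names are illustrative):

mkvirtualenv projectname           # create an isolated environment
workon projectname                 # re-activate it later from any shell
pip install -r requirements.txt    # pull the pinned libraries into it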

Good way to deploy a Django project to a testing server?

This is specific to my current project. But maybe the answers will reveal some more generic solutions.
Here is the situation:
I develop the Django project on my Windows box
I use SVN to commit to an SVN repository
while developing, I use the development server that comes with Django
there is a testing server (Apache) running somewhere else, and every time I finish something I need to manually copy my work via WinSCP/PuTTY and make sure it works on the testing server
the testing server is accessible for our testers to use, test, and report bugs
I would like to automate this process, as it is very painful. It involves exporting the whole repository, copying it to the testing server, getting rid of the .pyc files, sometimes restarting Apache, and using the correct settings.py (usually some renaming).
I would like the testing server to automatically retrieve new files after each SVN commit. I could write a custom script to do all this, but something tells me there are easy-to-use solutions that could change my workflow and make things less painful.
One extra bonus: there is a designer who works on the HTML/CSS templates directly on the testing server. I need to check whether he has made changes and transfer them to my computer and subsequently to the SVN repository. My boss thinks it's too dangerous to give him SVN access. Any ideas to help me out with this, too?
Deployment:
I would say it's better to do the deployment the same way you do it for production. Fabric is a good solution.
SVN way:
If you want to do it the SVN way, create a branch called 'testing'. Once you have a working version of your code ready for testing, merge the development branch into the testing branch. Make sure the permissions on the testing branch restrict everyone else from merging into it. After merging, the test team should update to a specific revision.
.pyc
It's unnecessary to remove .pyc files manually; you can set the svn:ignore property so that compiled files are skipped on commit. Create a file .svnignore containing
*.pyc
and run this command
svn propset -R svn:ignore -F .svnignore .
If you've already got yourself into the mess of versioning your compiled files, then you can do either of these things:
find -name "*.pyc" -exec svn revert {} \;
find -name "*.pyc" -exec svn delete {} \;
Django settings file
You can set the DJANGO_SETTINGS_MODULE environment variable, through which Django picks up the corresponding settings file.
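For example (the module name is made up), the testing server's environment could set:

export DJANGO_SETTINGS_MODULE=myproject.settings_test

while developers leave it pointing at the default settings module locally.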
Designer
Well, a designer working directly on the test server is not a bonus point. :) It's a headache. In an ideal environment no one should touch the code on the testing server. Create a separate branch for the designer, or have him commit to the dev branch, which all the developers can merge.
One option is to create a read-only SVN user and have it check out the SVN repository on the Apache server. Then to do a build you run 'svn update'. You can check whether the designer modified files by running 'svn status'.
If your SVN repository is on the same machine as your QA Django instance, you could use a post-commit hook to run svn update after each commit, and bounce Apache if needed. See http://subversion.tigris.org/faq.html#website-auto-update
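A post-commit hook along those lines can be as small as this sketch (the paths are assumptions; the script lives in the repository's hooks/ directory on the server and must be executable):

#!/bin/sh
# hooks/post-commit: refresh the QA working copy after every commit
svn update /var/www/qa-site
touch /var/www/qa-site/project/wsgi.py   # or bounce apache: apachectl graceful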