I am quite new to programming, and all of my development has been on my local runserver using TextMate and Terminal. I have written a small app with a few hundred lines of code and I'd like to push it to an EC2 server. My only knowledge in terms of 'development tools' is Django's local runserver, TextMate and Terminal.
What tools or programs should I look into learning to be able to have an effective workflow? Should I be using some sort of IDE over TextMate for this? My main concerns are being able to develop on my local runserver and then painlessly push that to my production server.
As @isbadawi said, use Fabric. It's better than just using the terminal because you can automate things. As far as deployments go, you can simplify it down to: fab -H your.host.com deploy. Inside the fabfile you write commands; a simple deploy might (a sketch follows this list):
Cause the server to download the most recent code from SCM
Update the database (syncdb / migrations / what have you)
Cause apache or whatever you're using to reload the configuration
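A minimal fabfile sketch of those three steps (the project path, branch and service name are assumptions about your setup):

from fabric.api import cd, run, sudo

def deploy():
    with cd('/srv/yourproject'):            # assumed project path on the server
        run('git pull origin master')       # 1. grab the most recent code from SCM
        run('./manage.py migrate')          # 2. update the database (syncdb / migrations)
        sudo('service apache2 reload')      # 3. have apache pick up the new code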
As far as some more general tips go:
If you're using WSGI, keep your WSGI file under source control
The same goes for local settings files; have your deploy script rename the appropriate one to local_settings.py as part of the build
If you're really going for painless, look into Django hosting services like Gondor or Ep.io. Those will have clients that you can just deploy to pseudo-painlessly, although you will have to change some settings on your side to match theirs better, as there are many many ways to deploy a Django app.
Update: Ep.io is no longer in business as a hosting service. My new go-to is Heroku.
Update 2: I used to link local_settings.py in deployments, but now I'm leaning towards using the DJANGO_SETTINGS_MODULE config variable. See rdegges' "django-skel" settings for a good way to do this.
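A rough sketch of that approach, assuming your settings live in a settings package with prod and local modules (the module names here are an assumption):

# on the server (init script, shell profile or Heroku config vars)
export DJANGO_SETTINGS_MODULE=yourproject.settings.prod
# locally, point it at yourproject.settings.local instead

Django then imports whichever module the variable names, so nothing has to be renamed at deploy time.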
A DVCS such as git or Mercurial will allow you to develop and test locally, and then push the changes to a remote system for staging and production.
I have successfully built several web sites hosted on an Nginx server using Django, uWSGI and virtualenv.
I had never used version control but now I am starting to use Git.
I understand how to create, commit and push branches.
My question is: how to make different Git branches visible at the web address of the site I'm working on?
Do I change the Nginx config file to point somewhere different?
I just updated the dev branch of my project, and of course the site does not reflect the changes.
How can I tell the server to serve the dev branch or the master branch of the project?
I would prefer to avoid a complicated system with different subdomains for different branches — I really just want the simplest thing that will work.
[update] I have found lots of pages that explain complex ways to set up staging servers etc., but I really just want to understand what is going on... there's a giant conceptual hole in my understanding about how the server interacts with a local Git project.
Right now, the Nginx config and the uWSGI config point to a folder like:
/var/www/sitefiles
That is the Django folder (inside it is sitefiles/settings.py etc.).
It is in that folder that I did git init, some commits, branching & pushes.
Does using Git mean that the Nginx and uWSGI configs should point elsewhere?
It's pretty simple: go to the project path where the git repo is, check out the required branch, and touch the wsgi file
git checkout dev
touch project/wsgi.py
or, to roll back to the master branch:
git checkout master
touch project/wsgi.py
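To answer the config question: the Nginx and uWSGI configs can keep pointing at /var/www/sitefiles; checking out a branch just changes which files sit in that folder. Note that touching the wsgi file only triggers a reload if uWSGI is configured to watch it, e.g. with a touch-reload line in its ini (the wsgi path here is an assumption based on the question, and the rest of your uWSGI options stay as they are):

[uwsgi]
touch-reload = /var/www/sitefiles/sitefiles/wsgi.py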
My web server depends on nginx, django, and a lot of python dependencies. I'm wondering if there is a way to create a portable image/script that I can run in a new server and quickly get it up and running.
Is Docker relevant to this?
You should always use git to manage your code. With git you can get your Django project onto the other server quickly, but that only covers the code.
You also have to migrate your database. Every DB engine has dump options for doing this.
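For example, assuming PostgreSQL (the database and user names are placeholders):

pg_dump -U myuser mydb > mydb.sql
psql -U myuser mydb < mydb.sql

The first command runs on the old server, the second on the new one after creating the empty database.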
Do not forget to move your static assets. You probably have all of them in one directory.
What about your nginx and database installation and configuration? This is where Docker is relevant.
With all of this, you should migrate successfully.
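As a rough illustration, a minimal Dockerfile for the Django app itself might look like this (it assumes a requirements.txt in the project root and gunicorn as the entry point, neither of which is given in the question); nginx and the database would get their own containers or installs alongside it:

FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "project.wsgi:application", "--bind", "0.0.0.0:8000"]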
Currently I have 3 linode servers:
1: Cache server (Ubuntu, varnish)
2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)
3: DB server (Ubuntu, postgresql + pg_bouncer)
On my app-server I have multiple sites (top-level domains). Each site is inside a virtualenv created with virtualenvwrapper. Some sites are big with a lot of traffic, and some sites are small with little traffic.
A typical site consist of python (django), celery (beat, flower) and gunicorn.
My current development pattern is to work inside a staging environment on the app-server and commit changes to git. Then I switch to the production environment, do a git pull and a ./manage.py migrate, and restart the process with sudo supervisorctl restart sitename:, but this takes time! There must be a simpler method!
Therefore it seems like docker could help simplify everything, but I can't decide the best approach for how I could manage all my sites and containers inside each site.
I have looked at http://panamax.io and https://github.com/progrium/dokku, but not sure if one of them fit my needs.
Ideally I would run a development version of each site on my local machine (emulating cache-server, app-server and db-server), do code changes there and test them. When I would see the changes worked, I would execute a command that would do all the heavy lifting and send the changes to the linode servers (I would think mostly the app-server), do all the migration and restarting the project on the server.
Could anyone point me in the right direction as how to achieve this?
I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.
There doesn't seem to be any really turnkey solution on Docker yet.
It's also been frustrating that most of the 'Django+Docker' tutorials just focus on a single Django site, so they bundle up the webserver and everything in the same Docker container. I think if you have multiple sites on a server you want them to share a single webserver, but this quickly gets more complicated than presented in the tutorials, which are no longer much help.
Roughly what I came up with is this:
using Fig to manage containers and the complicated Docker config that would be tedious to type as command-line options all the time (there's a fig.yml sketch after this list)
sites are Django, on uWSGI+Nginx (no reason you couldn't have others though)
I have a git repo per site, plus a git repo for the 'server'
separate containers for db, nginx and each site
each site container has its own uWSGI instance... I do some config switching so I can either bring up a 'dev' container with uWSGI acting as a standalone web server, or a 'live' container where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server.
I'm not sure yet how useful the 'dev' uWSGI servers are, I might switch to just running Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading.
In the 'server' repo I have all the shared Dockerfiles, for Nginx server, base uWSGI app etc.
In the 'server' repo I have made Fabric tasks to do my deployment (checkout server and site repos on the server, build docker images, run fig up etc).
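A stripped-down sketch of the fig.yml I mean (service names and paths are made up, and a real one carries more per-site config):

db:
  image: postgres
nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - site1
site1:
  build: ./sites/site1
  command: uwsgi --ini /etc/uwsgi/site1.ini
  links:
    - db
  volumes:
    - ./sites/site1:/code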
Speaking of deployment, frankly I'm not that keen on the Docker Registry idea. This seems to mean you have to upload hundreds of megabytes of image file to the registry server each time you want to deploy a new container version. This sucks if you are on a limited-bandwidth connection at the time and seems very inefficient.
That's why so far I decided to deploy new code via Git and build the new images on the server. I don't use a Docker Registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit so I'm curious for feedback.
I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax etc that may or may not work for you (I don't think any of them are really ready yet) you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.
I tried to get on with Dokku early on in my search but had to abandon because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox vm to run the Docker daemon. It didn't seem worth the hassle of this when I wasn't certain I wanted to be stuck with how Dokku works at the end of the day.
I am quite a noob when it comes to deploying a Django project. I'd like to know what are the various methods to deploy Django project and which one is the most preferred.
The Django documentation lists Apache/mod_wsgi, Apache/mod_python and FastCGI etc.
mod_python is deprecated now, one should use mod_wsgi instead.
Django with mod_wsgi is easy to setup, but:
you can only use one Python version at a time [edit: you can even only use the Python version mod_wsgi was compiled for]
[edit: it seems I was wrong about mod_wsgi not supporting virtualenv: it does]
So for multiple sites (targeting different Django/Python versions) on one server, mod_wsgi is not the best solution.
FastCGI can be used with virtualenv, and with different Python versions, as you run it with
./manage.py runfcgi …
and then configure your webserver to use this fcgi interface.
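For example (the socket path and options here are just illustrative):

./manage.py runfcgi method=prefork socket=/tmp/mysite.sock pidfile=/tmp/mysite.pid

and then in nginx something like fastcgi_pass unix:/tmp/mysite.sock; inside the relevant location block.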
The new, hot thing in Django deployment seems to be gunicorn. It's a webserver that implements WSGI and is typically used as a backend, with a "big" webserver in front of it as a proxy.
Deployment with gunicorn feels a lot like fcgi: you run a process that does the Django processing via manage.py, and a webserver as the frontend to the world.
But gunicorn deployment has some advantages over fcgi:
speed - I didn't find the sources, but benchmarks say fcgi is not as fast as the f suggests
config files: for fcgi you must do all configuration on the command line when executing the manage.py command. This is unhandy when running multiple Django instances via init.d (the unix-like OS's system service startup). With gunicorn it's always the same command line, with just different configuration files (see the sketch after this list)
gunicorn can drop privileges: no need to do this in your init.d script, and it's easy to switch to one user per django instance
gunicorn behaves more like a daemon: writing a pidfile and logfile, forking to the background, etc., which again makes using it in an init.d script easier.
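To illustrate the config-file and daemon points, a sketch of a gunicorn config file (all values are assumptions for a typical site):

# gunicorn.conf.py
bind = "127.0.0.1:8000"
workers = 3
user = "www-data"                            # drop privileges to this user
pidfile = "/var/run/mysite/gunicorn.pid"
accesslog = "/var/log/mysite/access.log"
errorlog = "/var/log/mysite/error.log"
daemon = True                                # fork to the background

The init.d script then only needs something like gunicorn_django -c /etc/mysite/gunicorn.conf.py from the project directory: the same command line for every instance, just a different config file.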
Thus, I would suggest using the gunicorn solution, unless you have a single low-traffic site on a single server; then you could use the mod_wsgi solution. But I think in the long run you'll be happier with gunicorn.
If you have a Django-only webserver, I would suggest using nginx as the frontend proxy, as it's the best performing (again, this is based on benchmarks I read in some blog posts - I don't have the URL anymore).
Personally I use Apache as the frontend proxy, as I need it for other sites hosted on the server.
A simple setup instruction for django deployment could be found here:
http://ericholscher.com/blog/2010/aug/16/lessons-learned-dash-easy-django-deployment/
My init.d script for gunicorn is located at github:
https://gist.github.com/753053
Unfortunately I have not yet blogged about it, but an experienced sysadmin should be able to do the required setup.
Use Nginx/Apache/mod_wsgi and you can't go wrong.
If you prefer a simple alternative, just use Apache.
There is a very good deployment document: http://lethain.com/entry/2009/feb/13/the-django-and-ubuntu-intrepid-almanac/
I myself have faced a lot of problems in deploying Django projects and automating the deployment process. Apache and mod_wsgi were like a curse for Django deployment. There are several tools like Nginx, Gunicorn, Supervisord and Fabric which are trending for Django deployment. At first I used and configured them individually without deployment automation, which took a lot of time (I had to maintain testing as well as production servers for my client and had to update them as soon as a new feature was tested and approved), but then I stumbled upon django-fagungis, which totally automates my Django deployment, from cloning my project from Bitbucket to deploying on my remote server (it uses Nginx, Gunicorn, Supervisord, Fabric and virtualenv and also installs all the dependencies on the fly), all with just three commands :) You can find more about it in my blog post here. Now I don't even have to get involved in this process (which used to take a lot of my time): one of my junior developers runs those three commands of django-fagungis mentioned here on his local machine and we get a crisp new copy of our project deployed in minutes without any hassle :)
I'm new to Django and my very first project is my blog. I wonder how Django developers who use PyDev normally synchronize with their remote hosting server and update their sites?
I also would like to know, how do you combine usage of git with a django project? Should I just make a repository for the entire project?
At my company we've got an entire git repository for each project, including the Django sources that are put on the PYTHONPATH for each project, making the Django version project-dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not a Python module, we include both / and /django-lib in the PYTHONPATH. If your project is becoming large, you might want to consider using git submodules on your apps.
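Concretely, that just means both paths end up on the PYTHONPATH, something like this (the repo location is an assumption):

export PYTHONPATH=/path/to/repo:/path/to/repo/django-lib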
We've also set up several servers to support the developers. There's a testing server running a central testing database and a setup including Apache with WSGI to make testing on a real server possible, which sometimes is a bit different than the local manage.py the developers use before committing their changes.
The testing server is updated with the master branch of our git repository. We've made several scripts to allow all developers to do this without letting them login to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all scripts from it to make it just like our production server.
Every developer has set up their local project to communicate with the central testing database, which contains test data. I myself push my changes from the command line, but you could also use EGit for this.
When we've got a release, we put it in a separate branch, called 'release' (obviously), and the production server pulls only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope that this has helped you a bit. I won't say that this is the best workflow possible, but it works for us and you should figure out what works for you.
Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a bunch of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip freeze > ./docs/requirements/main.list
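Recreating that environment on another machine is then just the reverse (same virtualenv path assumed):

. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip install -r ./docs/requirements/main.list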
I'm fairly sure most Django developers would be familiar with Fabric, which I use for the following (a small sketch follows the list):
streamlining local interaction with git and,
pushing to our central repository,
pulling from our production or test server
touching the wsgi on the relevant server
and pretty much any other kind of task you might find yourself using ssh terminal session for.
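A sketch of what a couple of those tasks might look like in a fabfile (host and paths are made up):

from fabric.api import local, run, cd, env

env.hosts = ['test.example.com']        # the production or test server

def push():
    local('git push origin master')     # push to our central repository

def deploy():
    with cd('/srv/project'):
        run('git pull origin master')   # pull the latest code on the server
        run('touch project/wsgi.py')    # touch the wsgi so the server reloads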
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on github,
clone from my forked repo
make the changes
push it up to my own repo
and provide merge requests to the original repo owner
This way, I have a repo that my pip requirements lists can keep pulling from until the original application owner gets their own repo updated.
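A requirements entry for such a fork looks something like this (the repo name here is hypothetical):

-e git+https://github.com/yourname/forked-app.git#egg=forked-app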