Django runserver-style restart from within a Django app

Please read no further if you're squeamish or pious about Django!
It turns out that one of the several reasons you shouldn't use Django's "runserver" development server in production is that it's horrible with memory: it stores everything it sends or receives indefinitely. Other than that, though, it works just fine for what my client needs. Seemingly, when I change a file and runserver automatically restarts, all that memory is freed. So is there an easy way for me to replicate that functionality within the app's code, or can I trigger it somehow? A somehow that's less awful than appending a CRLF to a file that it's watching ;) Sorry for even mentioning that, Django puritans! BTW, the dev platform is Linux (64-bit) and deployment is Windows (64-bit).

Sadly you're going to ignore this, but it bears mentioning for anyone else considering the same thing.
The runserver is not suitable for a production environment. This isn't a "puritan" issue; it's just completely unsuitable for that use, and not setting up a real server is just lazy.
The runserver is not stable. Several error types are not handled appropriately and result in the server crashing or becoming stuck.
The runserver cannot serve more than a single request at a time. This includes static file requests. Try having 2-3 people using a runserver host at the same time. Have fun with that.
The runserver has not had any security audits done. It likely has large exploitable holes and no effort has been made to detect or eliminate them.

Just try touch settings.py. Touching the file updates its modification time, and the runserver autoreloader treats that as a code change and restarts the process.
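If you want to trigger that from within the app itself, here is a minimal sketch of the same trick (the function name is hypothetical; it just automates the touch):

    # views.py (sketch): bump the settings file's mtime so the runserver
    # autoreloader sees a "change" and restarts, freeing the hoarded memory.
    import importlib
    import os

    def trigger_restart():
        settings_mod = importlib.import_module(os.environ['DJANGO_SETTINGS_MODULE'])
        path = settings_mod.__file__
        if path.endswith(('.pyc', '.pyo')):
            path = path[:-1]  # the reloader watches the .py source file
        os.utime(path, None)  # same effect as `touch settings.py`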

Related

Django - Development server alternatives

Are there good alternatives to the Django development server (runserver) that are more performant, especially in concurrency and static file serving, and that keep the auto-reload function, without having to set up a full-blown production environment?
I'm working on Windows, so Gunicorn cannot be used.
You can install and use the rungevent command. It has an auto-reload function and it's more performant than thread-based servers (it is greenlet-oriented). The only caveat is static file serving: you must install a web server or proxy like nginx for that.
Are you running such high-volume tests on your dev server that you actually suffer from this, especially regarding static files? If so, then you must emulate, as said, a production environment (just have nginx correctly configured, pointing to the address:port you use for your rungevent command).
If static files are not your problem, install the rungevent command and see how it works.
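rungevent itself is a third-party command, but the core idea behind a greenlet-based server looks roughly like this sketch (the project name and port are placeholders, and this bare version lacks the auto-reload that the command adds):

    # serve_gevent.py (sketch): serve the Django WSGI app with gevent.
    from gevent import monkey
    monkey.patch_all()  # must run before Django and its libraries are imported

    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

    from django.core.wsgi import get_wsgi_application
    from gevent.pywsgi import WSGIServer

    # Each request runs in its own greenlet, so one slow request doesn't
    # block the whole server the way the single-threaded runserver does.
    WSGIServer(('127.0.0.1', 8000), get_wsgi_application()).serve_forever()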
No. Since dev sites are made to handle a limited number of requests, runserver runs fine on a machine that can match the requirements of your app.
If you are dealing with a large-scale dev project that your system cannot tolerate, then it's either time to reproduce a production environment or to upgrade.
I find it difficult to believe that your application is that bad in terms of performance. Again, if you are trying to test the behavior of a full production site (in terms of DB entries etc.), then it's time to emulate the production environment.
If that is not the case, then I would start checking the underlying models / code of the project.
Well, if you don't want to use the Django dev server you will have to spend some time on setup anyway. But the good part is that you only do it once; subsequent deploys will take very little time.
Not long ago I switched from FastCGI to uWSGI and it made my life much easier.
uWSGI is awesome! It has autoreload (which works both in daemon mode and when launched directly in a terminal). When launched in a terminal you can use a debugger (e.g. pdb) during a request, just like you do in the Django dev server. And of course you can debug with print in simple cases.
I'm using it with nginx, which serves the static files and proxies everything else to uWSGI, but it can of course be any server.
The most useful feature for me in this configuration is that you use the same thing for both dev and production. For simple projects, after developing you just turn off autoreload and a few other options and it's ready.
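For example, a minimal way to try the autoreload (the module path and port are placeholders; uWSGI has many more options):

    uwsgi --http :8000 --module mysite.wsgi --py-autoreload 2

Here --py-autoreload 2 makes uWSGI scan the loaded Python modules every 2 seconds and reload the workers when something changes.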

Deploying django in a production server

First of all, please let me be clear that I am a Windows user and very new to the web world. For the past months I have been learning both Python and Django, and it has been a great experience for me. Now I have somehow created a small project that I would like to deploy on a production server. Since Django has its built-in development server, there was no problem for me so far. But now that I have to deploy to a production server, I googled around and found Nginx + uWSGI or Nginx + Gunicorn as the best options for it. And as uWSGI and Gunicorn are incompatible with Windows, I think I should adopt Ubuntu or another Unix system.
So my questions are:
Just to be clear, as I will have to work with one of the above, please explain why I need two servers.
If I have to adopt the Ubuntu environment, do I have to learn Ubuntu shell scripting, SSH, and other stuff? Or will the hosting provider help me do that?
Please let me know what else I need for the above.
Thank you so much for your time, and please pardon me if this is a lame question. Hoping for positive responses.
A typical configuration involves two server processes (which can be run together on the same actual hardware or virtual server) so that the proxy server in front can buffer slow clients. For instance: a slow client will connect to nginx with a request. Nginx will pass the request on to Gunicorn and Gunicorn will respond. Nginx will then consume the Gunicorn response immediately, freeing up the Gunicorn resources right away. At that point, the slow client can take as much time as it wants to consume the response from Nginx without tying up much in the way of server resources. Alternatives to the two-server-process model are to use async workers with Gunicorn and put Gunicorn itself in front, or to use an async-sync combo like Waitress. Nginx in front has the added benefit of doubling as a ready-to-use statics server, though.
Note that "slow clients" can describe: mobile phones that lose their connection and leave the TCP socket hanging until timeout mid-request; mobile phones that are just slow; unreliable connections of all types; hostile denial-of-service clients who are deliberately trying to use server resources; sometimes any old connection that has a hiccup or malfunction for any reason. So this is a problem that will affect nearly any site.
You won't need shell scripting per se but getting used to Ubuntu will take some time. There is a lot to learn even outside of scripting, like how to use the package manager, how to configure packages once they're installed in ways that won't confound future updates, etc. And you will definitely have to learn to use SSH; it is one of the most fundamental server administration tools in the *nix world.
An alternative to learning to use Ubuntu or another server platform is to use a Platform-as-a-Service option like Heroku, as PaaS hosting providers really will take care of all of that stuff for you. I recommend this approach. That having been said, even though I think PaaS is a good option for people who want to focus on development and not server admin regardless of their level of skill, it's also true that a little bit of experience with Linux server platforms goes a long way in helping you to understand the environment that your code runs in. So even if you go with PaaS, you would still benefit from tinkering with Ubuntu a little (or a lot).
Another benefit of a PaaS is that their infrastructure normally handles the Nginx part of the deal (buffering of slow requests via proxy). This is the case with Heroku, for instance, so you won't have to worry about that part of the infrastructure at all.
This part of the question is too broad to answer, but let me know in the comments if you need clarification.
I'm doing it almost as in this tutorial: http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
Nginx is my proxy to the Django app running on Gunicorn, and it also serves the static files; virtualenv isolates my Python environment; supervisor watches over my app's processes.
It's possible you will run into some errors if you're not using PostgreSQL; ask and I will help (I used MySQL in the past, now it's PostgreSQL).
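The supervisor part of that stack boils down to a short program section; a sketch with placeholder names and paths:

    ; /etc/supervisor/conf.d/mysite.conf (sketch)
    [program:mysite]
    command=/srv/mysite/env/bin/gunicorn mysite.wsgi:application --bind 127.0.0.1:8000
    directory=/srv/mysite
    autostart=true
    autorestart=true    ; bring Gunicorn back up if it dies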
Firstly, there's no need to use Ubuntu if you're happier with Windows. I don't know if nginx works on Windows, but I'd be very surprised if it doesn't (in fact, here are the nginx docs for installing on Windows). Apache, meanwhile, definitely does work on Windows. The Django documentation has a full explanation of how to set up Apache/mod_wsgi to serve Django.
You don't need two servers. I'm not sure why you think you do: the usual reason for that is to have the static assets on a separate server, but you don't mention that as a reason. Since you're only talking about a small site, though, you don't even need to do that. One server configured to serve both Django and the static assets will do fine. Again, the docs explain exactly how to do that.

Django web server -- where should I draw the line between production and development?

I know that it's bad to use the Django web server in production. There's been at least one Stack Overflow question on this already.
But I'm wondering about where to draw the line between development and production? If I'm only allowing HTTP access to one (or a few) IP addresses, then I know I'm in development. What if I open it to all IP addresses, but only e-mail a couple friends to see what they think of what I've built?
As far as I can tell, the problems with using the Django server are:
1. It's single-threaded
2. Security
I don't think (1) is likely to be an issue if I'm only sharing it with a few people. For (2)--what's the worst-case scenario? Does it make a difference that I'm running on an Amazon EC2 server that I could very easily restart from a backup if something bad happened?
Well, the answer is actually very simple. You've left development when you have something you must protect: real user personal information, real data in your database that you'd be afraid to lose, etc.
Security isn't a concern until these things are present. The rule about not using the dev server in "production" is guidance, not mandatory. You can fire up the dev server in your production environment any time you want. However, you'd be silly to do so and then open up universal access to it, once your site is truly live and in use by the world.
Setting up mod_wsgi (or some other WSGI container) on a development machine takes all of 5 minutes, and can help you sort out deployment issues before you actually reach deployment. So really, why ever use the development server if you don't have to?
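For reference, the WSGI entry point that a container like mod_wsgi needs is tiny; this is essentially the wsgi.py that a modern django-admin startproject generates (the project name mysite is a placeholder):

    # mysite/wsgi.py: standard Django WSGI entry point.
    import os
    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
    application = get_wsgi_application()  # the object the WSGI server calls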

Is there an "easy" way to get mod_wsgi to reflect Django updates?

I'm reading http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode but it seems like way too much work. I've been restarting my apache2 server gracefully whenever I make tweaks to Django code, since it inconsistently picks up the right files, probably because it relies on cached .pyc files.
I set up Django with mod_wsgi using the steps outlined in this blog post.
It automatically reflects updates (although every now and then there will be a delay of a few minutes; I never figured out why, nor is it that much of an inconvenience).
If you are having to restart your Apache server, then you can't be using mod_wsgi daemon mode. Use daemon mode, and then simply touching the WSGI script file once an atomic set of changes has been completed isn't that hard, and it is certainly safer than a system which restarts arbitrarily when it detects any single change. If you do want automatic restart based on code changes, that is described in that document as well. For a Django slant on it, read:
http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html
http://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html
What is it about what is documented there which is 'way too much work'?
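The daemon-mode setup being referred to is only a few directives; a sketch with placeholder paths and names:

    # Apache config (sketch): run Django in a mod_wsgi daemon process.
    WSGIDaemonProcess mysite processes=2 threads=15
    WSGIProcessGroup mysite
    WSGIScriptAlias / /srv/mysite/mysite/wsgi.py

With that in place, running touch /srv/mysite/mysite/wsgi.py after a deploy reloads just that daemon process; Apache itself never restarts.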

Updating live server from VCS

I run all my Django sites as SCGI daemons. I won't get into the fundamentals of why I do this, but it means that when a site is running, there is a set of processes started from the following command:
/websites/website-name/manage.py runfcgi method=threaded host=127.0.0.1 port=3036 protocol=scgi
All is fine until I want to roll out a new release from the VCS (Bazaar in my case). I made an alias script called up that does the following:
alias up='bzr up; killall manage.py'
It is this generic for one simple reason: I'm lazy. I want one command that I can use under any site to update it. I'm logged into the server most of the time anyway, so I just hop into the root of the right site and call up. The site updates from BZR and restarts.
The first downside of this is that it kills all the manage.py processes on the machine: currently 6 sites and growing rapidly. The second (and potentially worse, at least for end-users) is that it's a severely non-graceful restart. If somebody was uploading an image or doing something else with a long connection time, their request would just die on the vine.
So what I'm looking for is suggestions for a single method that:
Is generic for lazy people like me (e.g. I can run it from any site root without having to remember which command I need to call); 'up' is the perfect name.
Only kills the current site. I'm only updating the current site, so only this one should die.
Does the restart in a graceful manner. If possible, it should wait until there are no more active connections. I've no idea how feasible this is.
Instead of killing everything with manage.py in the name, could you write a script for each site that only kills the manage.py processes from that site? (Edit: just write the scripts, put them in the root of each site (which you cd to anyway), and run those; still only one command to remember. A sketch follows below.)
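Something along these lines (a sketch; it assumes the processes were started with the site's full manage.py path, as in the command above):

    #!/bin/sh
    # up: per-site update script; drop one copy in each site root and run it from there.
    bzr up
    # Match only this site's manage.py by its full path, not every manage.py on the box.
    pkill -f "$(pwd)/manage.py"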
I don't know enough about SCGI or Bazaar to suggest much more than that... My method (I'm lazy too) uses Mercurial and Fabric for deployment: http://stevelosh.com/blog/entry/2009/1/15/deploying-site-fabric-and-mercurial/ – maybe it will give you an idea you can use?