Django web server -- where should I draw the line between production and development?

I know that it's bad to use the Django web server in production. There's been at least one Stack Overflow question on this already.
But I'm wondering about where to draw the line between development and production? If I'm only allowing HTTP access to one (or a few) IP addresses, then I know I'm in development. What if I open it to all IP addresses, but only e-mail a couple friends to see what they think of what I've built?
As far as I can tell, the problems with using the Django server are:
1. It's single-threaded
2. Security
I don't think (1) is likely to be an issue if I'm only sharing it with a few people. For (2)--what's the worst-case scenario? Does it make a difference that I'm running on an Amazon EC2 server that I could very easily restart from a backup if something bad happened?

Well, the answer is actually very simple: you've left development when you have something you must protect, such as real users' personal information or real data in your database that you'd be afraid to lose.
Security isn't a concern until these things are present. The rule about not using the dev server in "production" is guidance, not mandatory. You can fire up the dev server in your production environment any time you want. However, you'd be silly to do so and then open up universal access to it, once your site is truly live and in use by the world.

Setting up mod_wsgi (or some other WSGI container) on a development machine takes all of 5 minutes, and can help you sort out deployment issues before you actually reach deployment. So really, why ever use the development server if you don't have to?
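For reference, the entry point a WSGI container loads is tiny; here's a sketch assuming a project named mysite (a placeholder name; Django's startproject generates an equivalent wsgi.py for you):

# mysite/wsgi.py -- the module that mod_wsgi, uWSGI, or Gunicorn loads.
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

# Any WSGI container can serve this callable, e.g.:
#   gunicorn mysite.wsgi:application
application = get_wsgi_application()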

Related

Why is *SGI + Nginx/HTTP considered the best practice for deploying web applications?

My friend recently asked me the following question: given that Django already has runserver, why wasn't it extended to be a production-ready, customer-facing HTTP server? What people do instead is set up a uWSGI server that speaks WSGI and exposes something that Nginx forwards traffic to by reverse proxying...
Based on what I know, many other languages use this pattern: there is a "simple" HTTP server meant for development, as well as an interface for *GI (ASGI/WSGI/FCGI/CGI) that a web server is supposed to reverse proxy to. What is the main reason those development servers don't grow production-ready and instead assume the presence of another web server?
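To make the terminology concrete, the WSGI interface is just a Python callable; a minimal, framework-free sketch looks like this:

# A minimal WSGI application: the contract that runserver, uWSGI, and Gunicorn all speak.
def application(environ, start_response):
    # environ describes the request; start_response sets the status line and headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a WSGI app\n"]

# Any WSGI server can host it, e.g. the reference server in the standard library:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, application).serve_forever()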
Here are some of my theories, but I'm not sure if I'm missing something more significant:
History: dynamic websites date back to Perl/PHP; both worked as "dumb" CGI backends that were basically filters turning an HTTP request (stdin) into a response (stdout). This architecture worked for some time and became a common pattern,
Performance: web applications are often written in languages that don't JIT, and having a web server written in such a language would introduce extra overhead when milliseconds matter. Also, this lets a dedicated web server speed up static file serving,
Security: Django's runserver is clearly described as potentially insecure, according to this quote:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay.)
The last point seems to suggest that writing a production-ready HTTP server is too complex to fit within Django's goals. What kind of edge cases would need to be supported to get there?
Are any of these points actually valid, or am I missing the elephant in the room here?
Because they don't want to get into the web server business, and I think that's a wise decision.
Creating, developing, and most importantly maintaining a web server is not a trivial thing. They couldn't simply write it once and be done with it (in fact, that's pretty much what they did, and the result is runserver).
Rather than re-invent the wheel, they've chosen to leave it to those who do it best. They're not likely to match the stability and functionality of a proper web server by doing it as a side-project to support running Django applications. They're better spending their time making Django better.
It's also consistent with the UNIX philosophy, but that's not necessary to get into here.

Deploying Django on a production server

First of all, please let me be clear that I am a Windows user and very new to the web world. For the past few months I have been learning both Python and Django, and it has been a great experience for me. Now I have somehow created a small project that I would like to deploy on a production server. Since Django has its built-in development server, there was no problem for me. But now that I have to deploy it to a production server, I googled around and found Nginx + uWSGI or Nginx + Gunicorn as the best options for it. And as uWSGI and Gunicorn are incompatible with Windows, I think I should adopt Ubuntu or another Unix system.
So my questions are:
Just to be clear, as I will have to work with one of the above, please explain to me why I need two servers.
If I have to adopt the Ubuntu environment, do I have to learn Ubuntu shell scripting, SSH, and other stuff? Or will the hosting provider help me do that?
Please let me know what else I need for the above.
Thank you so much for your time, and please pardon me if this is a lame question. Hoping for positive answers.
A typical configuration involves two server processes (which can be run together on the same actual hardware or virtual server) so that the proxy server in front can buffer slow clients. For instance: a slow client will connect to nginx with a request. Nginx will pass the request on to Gunicorn and Gunicorn will respond. Nginx will then consume the Gunicorn response immediately, freeing up the Gunicorn resources right away. At that point, the slow client can take as much time as it wants to consume the response from Nginx without tying up much in the way of server resources. Alternatives to the two-server-process model are to use async workers with Gunicorn and put Gunicorn itself in front, or to use an async-sync combo like Waitress. Nginx in front has the added benefit of doubling as a ready-to-use statics server, though.
Note that "slow clients" can describe: mobile phones that lose their connection and leave the TCP socket hanging until timeout mid-request; mobile phones that are just slow; unreliable connections of all types; hostile denial-of-service clients who are deliberately trying to use server resources; sometimes any old connection that has a hiccup or malfunction for any reason. So this is a problem that will affect nearly any site.
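As a rough sketch of the Gunicorn half of that setup (the file name, numbers, and "mysite" project name are just illustrative; Gunicorn reads a Python config file like this via gunicorn -c gunicorn.conf.py mysite.wsgi:application):

# gunicorn.conf.py -- illustrative settings for running Gunicorn behind nginx.
# Bind to localhost only; nginx proxies public traffic to this address and
# buffers slow clients so these workers are freed up quickly.
bind = "127.0.0.1:8000"
workers = 3          # a common starting point is 2 * CPU cores + 1
timeout = 30         # seconds before an unresponsive worker is recycled
accesslog = "-"      # log requests to stdout (Supervisor/systemd can capture it)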
You won't need shell scripting per se but getting used to Ubuntu will take some time. There is a lot to learn even outside of scripting, like how to use the package manager, how to configure packages once they're installed in ways that won't confound future updates, etc. And you will definitely have to learn to use SSH; it is one of the most fundamental server administration tools in the *nix world.
An alternative to learning to use Ubuntu or another server platform is to use a Platform-as-a-Service option like Heroku, as PaaS hosting providers really will take care of all of that stuff for you. I recommend this approach. That having been said, even though I think PaaS is a good option for people who want to focus on development and not server admin regardless of their level of skill, it's also true that a little bit of experience with Linux server platforms goes a long way in helping you to understand the environment that your code runs in. So even if you go with PaaS, you would still benefit from tinkering with Ubuntu a little (or a lot).
Another benefit of a PaaS is that their infrastructure normally handles the Nginx part of the deal (buffering of slow requests via a proxy). This is the case with Heroku, for instance. So you won't have to worry about that part of the infrastructure at all.
This part of the question is too broad to answer, but let me know in the comments if you need clarification.
I'm doing it almost exactly as in this tutorial: http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/
Nginx is my proxy to the Django app running on Gunicorn, and it also serves the statics; virtualenv provides my Python environment, and Supervisor watches that my app keeps running.
It's possible you will run into some errors if you're not using PostgreSQL; ask and I will help (I used MySQL in the past, now it's PostgreSQL).
Firstly, there's no need to use Ubuntu if you're happier with Windows. I don't know if nginx works on Windows, but I'd be very surprised if it doesn't (in fact, here are the nginx docs for installing on Windows). Apache, meanwhile, definitely does work on Windows. The Django documentation has a full explanation of how to set up Apache/mod_wsgi to serve Django.
You don't need two servers. I'm not sure why you think you do: the usual reason for that is to have the static assets on a separate server, but you don't mention that as a reason. Since you're only talking about a small site, though, you don't even need to do that. One server configured to serve both Django and the static assets will do fine. Again, the docs explain exactly how to do that.
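If it helps, the settings side of serving statics from the same box is small; a sketch with placeholder paths:

# settings.py (excerpt) -- where collectstatic gathers files for the web server to serve.
STATIC_URL = "/static/"
STATIC_ROOT = "/var/www/example.com/static/"   # placeholder path

# Then run: python manage.py collectstatic
# and point Apache/nginx at STATIC_ROOT for the /static/ URL prefix.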

Move to 2 Django physical servers (front and backend) from a single production server?

I currently have a growing Django production server that has all of the front end and backend services running on it. I could keep growing that server larger and larger, but instead I want to try and leave that main server as my backend server and create multiple front end servers that would run apache/nginx and remotely connect to the main production backend server.
I'm using Slicehost now, so I don't think I can benefit from having the multiple servers run on an intranet. How do I do this?
The first step in scaling your server is usually to separate the database server. I'm assuming this is all you meant by "backend services", since you haven't given us any more details.
All this needs is a change to your settings file. Change DATABASE_HOST from localhost to the new IP of your database server.
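Depending on your Django version, that setting is either the old-style DATABASE_HOST or the HOST key inside the DATABASES dict; a sketch with placeholder names, credentials, and IP:

# settings.py (excerpt) -- point Django at the separate database server.
# The IP, database name, user, and password below are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "10.0.0.2",   # was "localhost" when everything ran on one box
        "PORT": "5432",
    }
}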
If your site is heavy on static content, creating a separate media server could help. You may even look into a CDN.
The first step usually is to separate the server running the actual Python code from the database server. Any background jobs that do processing would probably run on the database server. I assume that when you say front end server, you actually mean a server running Python code.
Now, as every request will have to do a number of database queries, latency between the web server and the database server is very important. I don't know if Slicehost has some feature to allow you to create two virtual machines that are "close" in terms of network latency (a quick Google search did not find anything). They seem like nice guys, so maybe you could ask them if they have such a service or could make an exception.
Anyway, when you do have two machines on Slicehost, you could check the latency by simply pinging between them. When you have the result, you will probably know whether this is feasible at all.
Further steps depend on your application. If it is media heavy, then maybe using a separate media server would make sense. Otherwise the normal step is to add more web servers.
--
As a side note, I personally think it makes more sense to invest in real dedicated servers with dedicated network equipment for this kind of setup. This of course depends on your budget.
I would also suggest looking into Amazon EC2 where you can provision servers that are magically close to each other.

Deploying a Custom Program to a Hosting Service

I am a total newbie in servers/hosting etc, although I have some experience in programming in C,Java,etc. So excuse me if the question is 'absurd'.
I recently bought service from a hosting site, namely this (hostmds). I have some code I've written in C++ and I want to run it on the hosting site. So my question is:
Is this possible, or will I have to rewrite everything in a new language?
What should my approach be?
Edit: I have a Shared-Hosting account.
You will have to get a "virtual private server" account from your host in order to do this. This will enable you to compile your program on your host machine and run it essentially as if it were a separate machine under your control.
This means you will also be responsible for maintaining your own HTTP server program (such as Apache, if running on a Linux/Unix host), and your own database servers and other support.
If you have a "shared hosting" account (the most common low cost option) with SSH support, you may be able to compile your program, and even run it, but you will be subject to the whims (capricious or otherwise) of the administrators of your system (that is, you may find that libraries you need are removed or moved around).
What type of hosting is this?
What kind of application is this, is it a daemon?
Depending on the amount of access rights you have, you can run the code in the cgi-bin folder or through the shell of the server.
Depending on the OS/compiler you've used to write your code, you might have to modify some things so that it'll work on the target OS. You should probably add some more details. :)
Many hosting services provide CGI/FastCGI/SCGI that can be used for running C++ webapps. However, it depends on your host whether you can actually do this, as it may be difficult to get binaries built on some other system to run on the web hosting service (if you even can upload them in the first place).
On shell services and virtual servers you can also run daemons (that directly listen on a port), but especially on shell services you cannot listen on low ports (below 1024), for security reasons.
Notice that the cheapest hosting packages generally only allow PHP at most, so you will need something more expensive for more access.
It is best to ask the hosting provider for further information, as these things differ wildly from one host to another.

Mac OS X web sharing and Django

I created a web app with Django and I have it running on localhost (http://127.0.0.1:8000/), my question is, how can I make it available to the world, using Mac OS X's web sharing or something?
Thanks!
When you start the server, specify the public IP, or use 0.0.0.0 to listen on any IP.
Example:
sudo python manage.py runserver 0.0.0.0:80
If you start your application without an IP and port, it binds only to the loopback address, 127.0.0.1, and will not be accessible from your network.
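One caveat, depending on your Django version: newer releases also require the address you serve on to be listed in ALLOWED_HOSTS, or requests from other machines are rejected. A sketch with a placeholder public IP:

# settings.py (excerpt) -- hosts/IPs that Django will accept requests for.
# 203.0.113.5 is a placeholder; use your machine's actual public address.
ALLOWED_HOSTS = ["203.0.113.5", "localhost", "127.0.0.1"]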
First off, I would strongly suggest you not serve a website from your Mac. It's a really bad idea™. Both Mac OS X web sharing and Django's included HTTP server (which I assume you're using) are intended for testing purposes only, for a number of reasons concerning speed, security, et al. which are frankly too long to post here (but I hope that someone will :)
Second, it's already open to the world: anyone can connect to your computer using your IP address instead of the loopback 127.0.0.1 (unless you're NATted). This, again, is quite useful to test it (and have your friends/colleagues/boss test it) temporarily, but again it is not fit for production use. Really.
It depends what your real purpose is, what you mean by "available to the world...or something". If you do want it to be permanently accessible from the web, you need to host it on a server (be it shared or dedicated), you won't keep your Mac turned on forever, will you? :)
For hosting Django on shared hosting, I'd recommend WebFaction; step-by-step tutorials on setting up a Django project can be found in their screencasts and forums ($9.50 per month for the basic plan, with a two-month money-back guarantee, which actually works, tried it myself :). More options are on Djangofriendly.com.
For a dedicated server, ask yourself if you want to manage the whole server (OS, web server, database server, memcache, firewall, backups...) yourself. If the answer is "yes", check out Linode, Rackspace, Slicehost, or even Amazon Web Services, but bear in mind it's more expensive and way more complicated; that's also what gives you the ultimate flexibility. Once you are ready to try, this is one of the best tutorials I've found on the net for the subject.
If all you need is a proof of concept, that "whatever I can access from my web browser should be accessible from anywhere in the world", ask your ISP whether you are given a public IP address. If not, hm, better go for the options mentioned above :) If you do, then find out what IP it is by visiting whatismyipaddress.com. Then start the web server as Prashanth suggested, and enter that IP address in your browser. Get nothing? a) Turn off the Mac OS X firewall. Still nothing? b) Connect your Mac directly to the ethernet cable your ISP provides, without a router in between. Retry entering your outside IP in the browser. Works? Great, go google "port forwarding"; that will tell you how to configure your router to get the same effect when the router is back in between. Doesn't? Ask a separate question on Stack Overflow and provide as many details about what you are doing as you can.
Mac OS X web sharing is useless if the packets aren't routed correctly to reach your computer on the network. I guess all it can do is start Apache and open some ports in the firewall. But if your personal router or ISP won't forward external packets to your computer, you won't get what you want.
Good luck!