Seamless deployment of Django to a single server

I have a new website built on Django and Python 2.6 which I've deployed to the cloud (buzzword compliant AND the Amazon micro EC2 instance is free!).
Here are my detailed notes: https://docs.google.com/document/d/1qcZ_SqxNcFlGKNyp-CXqcFxXXsKs26Avv3mytXGCedA/edit?hl=en_US
As this is a new site (and I wanted to play with the latest and greatest), I used Nginx in front of Gunicorn, managed by Supervisor.
All software installed from trunk using YUM / easy_install.
My database is Sqlite (for now - not sure of where to go next, but that is not the question). Also on the todo list: virtualenv + pip.
So far so good.
My code is in SVN. I wrote a simple fabfile to deploy: it checks out the latest code and restarts Gunicorn via Supervisor. I hooked my DNS name to an Elastic IP.
It works.
My question is, how do I update the site without a disruption of service? Users of the site get 404s / 500s when I run my little update script.
Is there a way to do this without adding another server (price is key)?
I would love to have a staging system (on a different port?) and a seamless switch between Staging and Production. On the same (free) server. Via Fabric.
How do I do that? Is it the same Nginx running both sites? Can I upgrade Staging without hurting Production? What would the fabfile look like? What would the directory tree look like?
Thanks!
Tal.
Related:
Seamless deployment in Rails

Nginx lets you set up failover for your reverse proxies: you can designate one Gunicorn instance as the primary, and as long as that instance is running, Nginx will never look at the failover.
If you configure your site so that the new version lives in the failover instance, your fabfile just needs to update the failover instance with the new version of the site and then, when ready, shut down the primary instance. Nginx seamlessly fails over to the second instance, and bam: you are running the new version with no downtime.
You can then update the primary instance and turn it back on, and your primary is live again. At that point you can keep the failover instance running just in case, or turn it off.
Some things to consider. Be careful with databases: if you are using SQLite, make sure both Gunicorn instances can access the SQLite file.
If you have a regular database server this is less of a problem; you just need to apply any database migrations the new version needs before you switch to it.
If the changes are backwards compatible, it isn't a big deal. If they aren't, be careful: you could break the old version of the site before you switch over to the new version.
To make things easier, I would run the two versions in separate virtual environments.
If you use supervisord to control Gunicorn, you can use the supervisorctl commands to reload/restart whichever instance you want to deploy without affecting the other one.
Hope that helps
Here is an example of an nginx config (not a full config file; the unimportant parts are removed).
This assumes the primary Gunicorn instance is running on port 9005 and the backup is running on port 9006.
upstream service-backend {
    server localhost:9005;        # primary
    server localhost:9006 backup; # only used when primary is down
}

server {
    listen 80;
    root /opt/htdocs;
    server_name localhost;
    access_log /var/logs/nginx/access.log;
    error_log /var/logs/nginx/error.log;

    location / {
        proxy_pass http://service-backend;
    }
}
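To tie this back to the question's fabfile request, here is a minimal Fabric sketch of the flow above (the host, checkout paths, and supervisor program names are all assumptions):

from fabric.api import env, run, sudo

env.hosts = ['myserver.example.com']  # assumption: your server's address

def deploy():
    # update the code in the backup instance's checkout
    run('svn update /srv/site_backup')
    # restart the backup gunicorn so it serves the new version
    sudo('supervisorctl restart gunicorn_backup')
    # stop the primary; nginx fails over to the backup with no downtime
    sudo('supervisorctl stop gunicorn_primary')

def promote():
    # update the old primary too, then bring it back as the live instance
    run('svn update /srv/site_primary')
    sudo('supervisorctl start gunicorn_primary')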

Sounds like you need to figure out how to tell Gunicorn to gracefully restart. It seems all you have to do is send a HUP signal to the Gunicorn process to notify it to reload the app. As described in the link above, the Gunicorn docs explain how to do it.
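For example, if Gunicorn writes a pidfile, the reload is a one-liner (the pidfile path is an assumption; use wherever yours actually lives):
$ kill -HUP $(cat /var/run/gunicorn.pid)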

Related

How to configure Elastic IP with django app in aws?

I am building an app using Django on an EC2 Ubuntu instance, and I have associated an Elastic IP with my instance.
I have done the following steps:
1. Created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using the command python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group's inbound settings to allow HTTP on port 80 from 0.0.0.0/0.
9. Was able to reach my project from any browser.
But the problem is that when I close the PuTTY session where I ran the runserver command, the Django project stops as well. I did not stop it manually.
Please help me keep it running after the PuTTY session is closed.
Thanks,
Kripa Sharma
Take a look at this answer.
I highly recommend that you start using Elastic Beanstalk (Python instance) to take care of all these steps for you. Very simple to setup, and no need to worry about any of the steps you listed.
You can follow these instructions to see how to deploy a Django app in less than 5 minutes.
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at "How to deploy with WSGI" section, which gives a few recommendations for commonly used Web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both.
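For instance, a minimal systemd unit might look roughly like this (a sketch; the user, paths, and project name are assumptions rather than the exact example from the Gunicorn docs):

[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=www-data
WorkingDirectory=/opt/myproject
ExecStart=/opt/myproject/venv/bin/gunicorn myproject.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target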
The latter is usually achieved with an http-server/proxy such as nginx or apache httpd. And again, Gunicorn has an example for us.
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service, or even in a terminal multiplexer such as GNU screen or tmux, it's not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration).
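For example, tmux keeps a command alive after you disconnect (the session name here is arbitrary):
$ tmux new -s django       # start a named session, then run your command inside it
$ tmux attach -t django    # reattach later; detach again with Ctrl-b then d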

How can I use Let's Encrypt with a Django application without stopping the server?

I have a Django application running on a server. I want to use Let's Encrypt to provide an encrypted connection. I could use the standalone option of their ACME client, but I don't want to stop my server, which is what I would have to do.
So there is the webroot option, which works with my already-running web server (nginx). Django would process the request in this case. My question is: what should this look like on the Django side to get it working (keeping automated renewal every few months in mind)?
I don't know what setup others use, but I generally set up Django apps with Nginx serving static content and Gunicorn as the application server. It's widely accepted that Django apps usually use this kind of two web server setup. The standard instructions for setting up Let's Encrypt with Nginx worked fine for me.
Or Digital Ocean have an excellent guide too.
EDIT: It looks like Nginx can do a "graceful" reload that just updates the config with no downtime. For Debian or Ubuntu pre Systemd this would be sudo service nginx reload, while for a distro with Systemd the command is sudo systemctl reload nginx.service.
In case other users come this way like I did from Google, here's how I improved this situation:
I was unsatisfied by my options when it came to creating ACME challenges for Let's Encrypt when running a Django application. So, I rolled my own solution and created a Django app! Basically, you can manage your ACME challenges as just another object, and the app will produce the proper end-point URL.
Yes, you are installing an app, which means a deploy/update of your project, but once you've done that, managing your challenges is far easier in the long run.
Simply pip install django-letsencrypt and follow the README to be on your way.
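For reference, the wiring is roughly this sketch, based on the package's README (check it for the exact app and URL names):

# settings.py
INSTALLED_APPS = [
    # ... your other apps ...
    'letsencrypt',
]

# urls.py
from django.conf.urls import include, url

urlpatterns = [
    # ... your other URLs ...
    url(r'^\.well-known/', include('letsencrypt.urls')),
]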

Shall I restart both nginx and gunicorn when production is updated?

What is the best practice when I have an update for my Django app pushed to production? Should I restart both the gunicorn and nginx services, with
sudo service gunicorn restart
sudo service nginx restart
or is restarting only gunicorn enough? Finally, does the order of the restarts make any difference if I have to do both? Thanks!
It entirely depends on how you've configured your box.
To keep downtime to an absolute minimum, I actually load my new release into a different directory on the box while the old release is still running. I create a new virtual environment based on my new release's requirements.txt. Then I start a second instance of gunicorn with the new release running in it (done via supervisord with entries in supervisord.conf), and leave the old instance still running.
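As a sketch, the two supervisord entries might look like this (the release paths, project name, and socket locations are all assumptions):

[program:myapp_old]
command=/srv/releases/old/venv/bin/gunicorn myproject.wsgi --bind unix:/tmp/gunicorn_old.sock
directory=/srv/releases/old

[program:myapp_new]
command=/srv/releases/new/venv/bin/gunicorn myproject.wsgi --bind unix:/tmp/gunicorn_new.sock
directory=/srv/releases/new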
I then update my nginx vhost file to point the server to the new release's gunicorn socket, and finally reload nginx. I do a quick check that the new site is up and functioning, and then I stop the old gunicorn instance. If for some reason it's not responding, I switch my nginx config back to point to the old one again, and then go figure out what's wrong.
I do all this using an Ansible script, but here's a great article with some Fabric scripts to do something similar: https://medium.com/@healthchecks/deploying-a-django-app-with-no-downtime-f4e02738ab06
If, on the other hand, you just update your code in-place, then there should be no changes needed to your nginx config, so you shouldn't need to reload it. Just reload gunicorn and you're good to go.

Development and production with docker with multiple sites

Currently I have 3 linode servers:
1: Cache server (Ubuntu, varnish)
2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)
3: DB server (Ubuntu, postgresql + pg_bouncer)
On my app-server I have multiple sites (top-level domains). Each site is inside a virtual environment created with virtualenvwrapper. Some sites are big with a lot of traffic, and some sites are small with little traffic.
A typical site consist of python (django), celery (beat, flower) and gunicorn.
My current development pattern is to work inside a staging environment on the app-server and commit changes to git. Then I switch to the production environment, do a git pull, run ./manage.py migrate, and restart the process with sudo supervisorctl restart sitename:, but this takes time! There must be a simpler method!
Therefore it seems like docker could help simplify everything, but I can't decide the best approach for how I could manage all my sites and containers inside each site.
I have looked at http://panamax.io and https://github.com/progrium/dokku, but I'm not sure if either of them fits my needs.
Ideally I would run a development version of each site on my local machine (emulating cache-server, app-server and db-server), do code changes there and test them. When I would see the changes worked, I would execute a command that would do all the heavy lifting and send the changes to the linode servers (I would think mostly the app-server), do all the migration and restarting the project on the server.
Could anyone point me in the right direction as how to achieve this?
I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.
There doesn't seem to be any really turnkey solution on Docker yet.
It's also been frustrating that most of the 'Django+Docker' tutorials just focus on a single Django site, so they bundle up the webserver and everything in the same Docker container. I think if you have multiple sites on a server you want them to share a single webserver, but this quickly gets more complicated than presented in the tutorials, which are no longer much help.
Roughly what I came up with is this:
- Using Fig to manage containers and the complicated Docker config that would be tedious to type as command-line options all the time (see the sketch after this list).
- Sites are Django, on uWSGI+Nginx (no reason you couldn't have others, though).
- I have a git repo per site, plus a git repo for the 'server'.
- Separate containers for the db, nginx, and each site.
- Each site container has its own uWSGI instance... I do some config switching so I can either bring up a 'dev' container with uWSGI acting as a standalone web server, or a 'live' container where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server.
- I'm not sure yet how useful the 'dev' uWSGI servers are; I might switch to just running the Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading.
- In the 'server' repo I have all the shared Dockerfiles, for the Nginx server, base uWSGI app, etc.
- In the 'server' repo I have made Fabric tasks to do my deployment (check out the server and site repos on the server, build docker images, run fig up, etc.).
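As mentioned in the first item of the list above, here is a minimal fig.yml sketch for one site plus a shared db (the image, build path, and command are assumptions):

db:
  image: postgres
web:
  build: ./sites/mysite
  command: uwsgi --ini /code/uwsgi.ini
  volumes:
    - ./sites/mysite:/code
  links:
    - db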
Speaking of deployment, frankly I'm not too keen on the Docker Registry idea. This seems to mean you have to upload hundreds of megabytes of image file to the registry server each time you want to deploy a new container version. This sucks if you are on a limited bandwidth connection at the time and seems very inefficient.
That's why so far I decided to deploy new code via Git and build the new images on the server. I don't use a Docker Registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit so I'm curious for feedback.
I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax etc that may or may not work for you (I don't think any of them are really ready yet) you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.
I tried to get on with Dokku early on in my search but had to abandon because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox vm to run the Docker daemon. It didn't seem worth the hassle of this when I wasn't certain I wanted to be stuck with how Dokku works at the end of the day.

Deploy Bitnami Django

I am quite computer-illiterate, but I have managed to use the Django framework on my own machine. I have had an account on Amazon Web Services (AWS) for some time, but it appeared rather complex to set up and make use of, so I put it off for a while. Then I decided to give it a try, and it was not as hard as I first thought to load an AMI and connect to the server with PuTTY. But since I was already using BitNami's Django Stack, I decided to take a look at their hosting offer (which builds on AWS). Since they appeared to offer "one-click deployment", I set up a new server through their interface. But then, it seems the "one-click deployment" promise applies to the server itself. There does not seem to be any interface for deploying Django projects through their site. Having used PuTTY already, and adding WinSCP to my machine, I can access the server and load my Django code onto it. But then I am lost. The documentation seems a bit thin (look here).
The crux of this is the following: can anyone make this part of the process more understandable? I.e., how do I deploy a Django project on a Linux server with Apache/mod_WSGI?
The other question is: I want to use Postgres. Am I free to install this on the server? Should I opt for EBS for this, or what is the downside of not having EBS?
I hope I am not too unworthy of your attention, thanks!
"how to deploy a Django project on a Linux server with Apache/mod_WSGI"
The Bitnami AMI already comes with all of this configured. Once it is installed, try going to the EC2 public URL on the default port 8000 and you will see the demo Django project set up there. You can add your own project once you have logged into the machine via PuTTY; check the /home/bitnami/ directory for the demo project. Copy your project over and configure your database.
"The other question is: I want to use Postgres. Am I free to install this on the server?"
Postgres and MySQL are already installed; configure them the same way you would on your local machine. Then, in your project, do ./manage.py runserver 0.0.0.0:9000, since port 8000 is already running another application.
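For reference, if you ever need to wire this up yourself rather than rely on the pre-built stack, a minimal Apache/mod_WSGI vhost looks roughly like this (the paths and project name are assumptions):

<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess myproject python-path=/opt/myproject
    WSGIProcessGroup myproject
    WSGIScriptAlias / /opt/myproject/myproject/wsgi.py
    <Directory /opt/myproject/myproject>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>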