running nginx/postgres with supervisord - required? - django

In all the standard django production setup templates I've seen, gunicorn is run under supervisor, whereas nginx and postgres are not configured under supervisor.
Any reason? Is this required for a production system? If not, why not?

In this architecture, Gunicorn works as the application server that runs our Django code. Supervisor is just a process management utility which restarts the Gunicorn server if it crashes. The Gunicorn server may crash because of our bad code, but nginx and postgres remain intact. So in the basic config we only look after the gunicorn process through supervisor, though we could do the same for nginx and postgres too.

You need supervisor for gunicorn because it's a simple server without any built-in tooling to restart it, run it at system startup, stop it at system shutdown, or bring it back up when it crashes.
PostgreSQL and nginx can take care of themselves in that respect, so there is no need for them to run under supervisor.
Actually, you can just use init.d, Upstart or systemd to start, stop and restart gunicorn; supervisor is simply an easier way to handle small servers like gunicorn.
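As a minimal sketch, a systemd unit for gunicorn could look like this (all paths and the myproject module name are placeholders):

[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=www-data
WorkingDirectory=/path/to/django/project
# restart the process automatically if it crashes
Restart=on-failure
ExecStart=/path/to/virtualenv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000

[Install]
WantedBy=multi-user.target

Dropped into /etc/systemd/system/gunicorn.service, systemctl enable gunicorn covers start-at-boot, and systemctl start/stop/restart gunicorn covers the rest.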
Consider also that it is common to run multiple Django apps on one system, which requires multiple separate gunicorn instances. Supervisor handles them better than init.d, Upstart or systemd.
There is also the uWSGI server, which doesn't need supervisor because it has built-in features for handling multiple instances, starting, stopping, and auto-reloading on code changes. Look at the uWSGI Emperor system.
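A rough sketch of Emperor mode (file names and paths are hypothetical): the emperor process watches a directory of ini files and keeps one uWSGI instance ("vassal") alive per file, respawning it if it dies or its ini file is touched.

# /etc/uwsgi/vassals/myapp.ini -- one vassal per Django app
[uwsgi]
chdir = /path/to/django/project
module = myproject.wsgi:application
virtualenv = /path/to/virtualenv
socket = /tmp/myapp.sock

# start the emperor once, e.g. from your init system:
uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data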

Related

Shall I restart both nginx and gunicorn when production is updated?

What is the best practice when I push an update to my Django app in production? Should I restart both the gunicorn and nginx services, with
sudo service gunicorn restart
sudo service nginx restart
or is restarting only gunicorn enough? Finally, does the order of the restarts make any difference if I have to do both? Thanks!
It entirely depends on how you've configured your box.
To keep downtime to an absolute minimum, I actually load my new release into a different directory on the box while the old release is still running. I create a new virtual environment based on my new release's requirements.txt. Then I start a second instance of gunicorn with the new release running in it (done via supervisord with entries in supervisord.conf), and leave the old instance still running.
I then update my nginx vhost file to point the server to the new release's gunicorn socket, and finally reload nginx. I do a quick check that the new site is up and functioning, and then I stop the old gunicorn instance. If for some reason it's not responding, I switch my nginx config back to point to the old one again, and then go figure out what's wrong.
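In nginx terms, the switch itself is just repointing the vhost at the new release's gunicorn socket and reloading gracefully (the release paths here are hypothetical):

# before: proxy_pass http://unix:/srv/releases/v41/gunicorn.sock;
# after starting the new release's gunicorn instance, change it to:
proxy_pass http://unix:/srv/releases/v42/gunicorn.sock;

# a reload picks up the new config without dropping in-flight connections
sudo nginx -s reload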
I do all this using an Ansible script, but here's a great article with some Fabric scripts to do something similar: https://medium.com/@healthchecks/deploying-a-django-app-with-no-downtime-f4e02738ab06
If, on the other hand, you just update your code in place, then there should be no changes needed to your nginx config, so you shouldn't need to reload it. Just reload gunicorn and you're good to go.

Restarting Gunicorn/Nginx when changes are made to files

I'm working on developing a web app using Django, hosted on Gunicorn and Nginx. It's getting a bit inconvenient to run "sudo service nginx restart; sudo service gunicorn restart" every time I make a change to the code. Is there a way I can make them restart automatically whenever I make a change, or make it so the changes show up without having to restart?
You could add the '--reload' argument, as mentioned in the gunicorn documentation:
Restart workers when code changes.
This setting is intended for development. It will cause workers to be
restarted whenever application code changes.
Source: http://docs.gunicorn.org/en/latest/settings.html
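For example, during development you might start gunicorn with (the project module name is hypothetical):

# workers restart automatically whenever a source file changes
gunicorn myproject.wsgi:application --reload --bind 127.0.0.1:8000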

Does uWSGI need to be restarted when Django code changes?

I'm working on a Django webapp that's running under nginx and uWSGI. When I deploy new Django code (e.g., settings.py), do I need to restart uWSGI? If so, why?
Background: I had a scenario where I updated settings.py and some other code and deployed it. I did not see the changes in the webapp behavior until I restarted uWSGI.
Yes, you need to restart the uWSGI process.
Python keeps the compiled code in memory, so it won't get re-read until the process restarts. The Django development server (manage.py runserver) actively monitors files for changes, but that doesn't happen by default with other servers. If you want automatic reloading in uWSGI, the touch-reload and py-autoreload uWSGI options might help.
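For instance, in a uWSGI ini file (the paths are placeholders):

[uwsgi]
# re-exec the instance whenever this file is touched -- handy as a deploy hook
touch-reload = /path/to/django/project/reload.txt
# development only: scan loaded Python modules every 2 seconds and reload on change
py-autoreload = 2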

Heroku, Django and celery on RabbitMQ

I'm building a Django project on Heroku.
I understand that gunicorn is recommended as the web server, so I need an event-loop type of worker, and I use gevent for that.
It seems that gevent's monkey patching does most of the work for me, so I can have concurrency, but how am I supposed to connect to RabbitMQ without real threads or jamming the whole loop?
I am baffled by this since Heroku themselves recommend gunicorn, celery and RabbitMQ but I don't see how all of these work together.
Do you understand that celery and gunicorn are used for different purposes?
Gunicorn is the webserver responding to requests made by users, serving them web pages or JSON data.
Celery is an asynchronous task manager, i.e. it lets you run arbitrary python code irrespective of web requests to your server.
Do you understand this distinction?
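To make the distinction concrete, here is a minimal sketch (the module name, the task, and the broker URL are all assumptions). Gunicorn serves the view, which only drops a message onto RabbitMQ; a separate celery worker process picks it up and runs the task:

# tasks.py
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def send_report(user_id):
    # executed later, inside a celery worker process, not a gunicorn worker
    print('building report for user %s' % user_id)

# in a Django view served by gunicorn you only enqueue the work and return:
#     send_report.delay(request.user.id)

The worker is its own process, started separately with something like celery -A tasks worker, so your gunicorn workers never block on the task itself.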

Deploying Django with Nginx as service

Currently I have a home server (Ubuntu) running nginx, where I use proxy_pass to hand requests to Django. I am using gevent as my WSGI server.
It all works fine until the server shuts down, either because I restart it for whatever reason or because something crashes (a power cut). Since nginx is a service, it starts up again when the server restarts; however, my Django apps do not. So then I have to manually go to each of my Django projects, activate their virtualenvs, and fire up the gevent process. This is very annoying, to say the least.
Is there a standard way of handling all of this automatically?
You need to set up a script for something like Upstart or Supervisor. Personally, I prefer using Supervisor. Here's the script I use to run my gunicorn instances:
[program:gunicorn]
; run gunicorn through the project's virtualenv python
command=/path/to/virtualenv/bin/python manage.py run_gunicorn -c /path/to/gunicorn.conf.py
directory=/path/to/django/project
user=www-data
; start with supervisord and restart automatically on crash
autostart=true
autorestart=true
redirect_stderr=True
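One caveat: the run_gunicorn management command came from gunicorn's old Django integration, which was removed in Gunicorn 19. On current versions you point supervisor straight at the gunicorn binary instead (paths and the myproject module name are placeholders):

command=/path/to/virtualenv/bin/gunicorn myproject.wsgi:application -c /path/to/gunicorn.conf.py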
Consider using a process manager to handle this for you; I like Supervisor.
You tell it how to launch your various processes, and then it runs in the background (just like nginx), starts up automatically on reboot, and launches your Django backend processes for you.