Shall I restart both nginx and gunicorn when production is updated? - django

What is the best practice when I push an update to my Django app in production? Shall I restart both the gunicorn and nginx services, with
sudo service gunicorn restart
sudo service nginx restart
or is restarting only gunicorn enough? Finally, does the order of the restarts make any difference if I have to do both? Thanks!

It entirely depends on how you've configured your box.
To keep downtime to an absolute minimum, I actually load my new release into a different directory on the box while the old release is still running. I create a new virtual environment based on my new release's requirements.txt. Then I start a second instance of gunicorn with the new release running in it (done via supervisord with entries in supervisord.conf), and leave the old instance still running.
I then update my nginx vhost file to point the server to the new release's gunicorn socket, and finally reload nginx. I do a quick check that the new site is up and functioning, and then I stop the old gunicorn instance. If for some reason it's not responding, I switch my nginx config back to point to the old one again, and then go figure out what's wrong.
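For illustration, here's a minimal sketch of the kind of vhost change involved, assuming the two gunicorn instances listen on hypothetical unix sockets (your socket paths and server_name will differ):

upstream app_backend {
    # swap this to the new release's socket, then reload nginx
    server unix:/run/gunicorn-new.sock;
    # server unix:/run/gunicorn-old.sock;  # old release, kept running until the new one is verified
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
    }
}

After editing the vhost, sudo nginx -t && sudo service nginx reload applies the change without dropping in-flight requests.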
I do all this using an Ansible script, but here's a great article with some Fabric scripts to do something similar: https://medium.com/#healthchecks/deploying-a-django-app-with-no-downtime-f4e02738ab06
If, on the other hand, you just update your code in-place, then there should be no changes needed to your nginx config, so you shouldn't need to reload it. Just reload gunicorn and you're good to go.
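For the in-place case, a rough sketch of a graceful reload, assuming gunicorn runs under supervisord with a hypothetical program name gunicorn and writes a pid file; sending HUP makes the master spawn fresh workers against the new code without closing the listening socket (it won't pick up new code if you start gunicorn with --preload):

# via supervisord (supervisor 3.2+; the program name is an assumption)
sudo supervisorctl signal HUP gunicorn

# or directly, if gunicorn was started with --pid /run/gunicorn.pid
kill -HUP $(cat /run/gunicorn.pid)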

Related

Django code changes don't reflect on production server

I made some changes to one of my model forms in my Django app - I added new input fields for the user. I then tested the changes in my environment and everything worked fine. I then committed and pushed the changes to the remote repo. I pulled the changes on my production server, which runs on AWS. I ran pkill -f runserver in the terminal to restart the server; however, the changes didn't take effect. Only the changes to HTML tags were visible (new labels, etc.).
The changes that weren't present are the ones that come from the model form: the new input fields for the user were missing completely from the template page.
What can be causing this behaviour?
You need to restart the gunicorn service every time you make changes to the code.
run sudo systemctl restart gunicorn
Do that and the changes will show up.
After you push your changes to the production server, you also need to apply the database migrations.
On your local repo (where you make your model changes) you run the makemigrations command
python manage.py makemigrations
And after you pull your changes on your production server, you run the migrate command
python manage.py migrate
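Putting the two answers together, a typical deploy sequence on the production box might look like this (the project path, virtualenv location and service name are assumptions; collectstatic only applies if you serve static files that way):

cd /srv/myproject                        # hypothetical project directory
git pull
source venv/bin/activate                 # if you use a virtualenv
pip install -r requirements.txt          # in case dependencies changed
python manage.py migrate
python manage.py collectstatic --noinput
sudo systemctl restart gunicorn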
It depends on the server setup you're using: nginx with gunicorn, or Apache.
If you set up the server using nginx and gunicorn, you can restart those services manually, but I'd recommend using something like supervisord to simplify the restart procedure.
If you're using Apache, use
sudo service httpd restart
If you use supervisor then you need to run
sudo supervisorctl reload
I was having the same problem; this command worked for me on AWS (Ubuntu Server 18.04 LTS).

running nginx/postgres with supervisord - required?

In all the standard Django production setup templates I've seen, gunicorn is run under supervisor, whereas nginx/postgres are not configured under supervisor.
Any reason? Is this required for a production system? If not, why not?
In this architecture, Gunicorn works as the application server which runs our Django code. Supervisor is just a process-management utility which restarts the Gunicorn server if it crashes. The Gunicorn server may crash due to our bad code, but nginx and postgres remain intact. So in the basic config we only look after the gunicorn process through supervisor, though we could do the same for nginx and postgres too.
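As a rough illustration, the supervisord entry for gunicorn usually looks something like this (the program name, paths, user and bind address are assumptions):

[program:gunicorn]
command=/srv/myproject/venv/bin/gunicorn myproject.wsgi:application --bind unix:/run/gunicorn.sock --workers 3
directory=/srv/myproject
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/gunicorn/out.log
stderr_logfile=/var/log/gunicorn/err.log

autorestart=true is what gives you the crash recovery described above.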
You need supervisor for gunicorn because it's a simple server without any built-in tooling to restart it, run it at system startup, stop it at system shutdown, or relaunch it when it crashes.
Postgresql and nginx can take care of themselves in that aspect, so there is no need for them to be running under supervisor.
Actually, you can just use init.d, upstart or systemd to start, stop and restart gunicorn; supervisor is just an easier way to handle small servers like gunicorn.
Consider also that it is common to run multiple Django apps on one system, which requires multiple separate instances of gunicorn. Supervisor handles them better than init, upstart or systemd.
There is also the uWSGI server, which doesn't need supervisor because it has built-in features for handling multiple instances, starting, stopping and auto-reloading on code changes. Look at the uWSGI emperor system.
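For reference, a hedged sketch of emperor mode: the emperor watches a directory of per-app ini files (the paths below are assumptions) and starts, stops or reloads a vassal whenever its ini file is added, removed or touched.

# start the emperor, pointing it at a directory of vassal configs
uwsgi --emperor /etc/uwsgi/vassals

# /etc/uwsgi/vassals/myapp.ini -- one file per Django app
[uwsgi]
chdir = /srv/myproject
module = myproject.wsgi:application
master = true
processes = 4
socket = /run/myproject.sock
; reload automatically when Python code changes (development convenience)
py-autoreload = 1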

Restarting Gunicorn/Nginx when changes are made to files

I'm working on developing a web app using Django, hosted on Gunicorn and Nginx. It's getting a bit inconvenient to run "sudo service nginx restart; sudo service gunicorn restart" every time I make a change to the code. Is there a way I can make them restart automatically whenever I make a change, or make it so the changes show up without having to restart?
You could add the '--reload' argument, as mentioned in the gunicorn documentation.
Restart workers when code changes.
This setting is intended for development. It will cause workers to be restarted whenever application code changes.
Source: http://docs.gunicorn.org/en/latest/settings.html
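In practice that's just an extra flag on the gunicorn command line (the module path and bind address below are placeholders):

gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --reload

If gunicorn runs under supervisor, add --reload to the command= line in its program entry and restart it once with supervisorctl; note the docs' caveat that this is meant for development, not production.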

Does uWSGI need to be restarted when Django code changes?

I'm working on a Django webapp that's running under nginx and uWSGI. When I deploy new Django code (e.g., settings.py), do I need to restart uWSGI? If so, why?
Background: I had a scenario where I updated settings.py and some other code and deployed it. I did not see the changes in the webapp behavior until I restarted uWSGI.
Yes, you need to restart the uWSGI process.
Python keeps the compiled code in memory so it won't get re-read until the process restarts. The django development server (manage.py runserver) actively monitors files for changes, but that won't happen by default with other servers. If you want to enable automatic reloading in uWSGI, the touch-reload and py-auto-reload uWSGI arguments might help.
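For example, either of these (shown here as command-line flags; the ini and file paths are assumptions) avoids manual restarts: touch-reload gracefully reloads uWSGI whenever a chosen file is touched, and py-autoreload periodically scans Python modules for changes (development only).

# reload whenever the marker file is touched, e.g. at the end of each deploy
uwsgi --ini myapp.ini --touch-reload /srv/myproject/reload.txt

# or, for development, check for changed Python modules every 2 seconds
uwsgi --ini myapp.ini --py-autoreload 2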

Seamless deployment of Django to single server

I have a new website built on Django and Python 2.6 which I've deployed to the cloud (buzzword compliant AND the Amazon micro EC2 instance is free!).
Here are my detailed notes: https://docs.google.com/document/d/1qcZ_SqxNcFlGKNyp-CXqcFxXXsKs26Avv3mytXGCedA/edit?hl=en_US
As this is a new site (and wanting to play with the latest and greatest) I used Nginx and Gunicorn on top of Supervisor.
All software installed from trunk using YUM / easy_install.
My database is SQLite (for now; not sure where to go next, but that is not the question). Also on the todo list: virtualenv + pip.
So far so good.
My code is in SVN. I wrote a simple fabfile to deploy: it checks out the latest code and restarts Gunicorn via Supervisor. I hooked my DNS name to an Elastic IP.
It works.
My question is, how do I update the site without a disruption of service? Users of the site get 404s / 500s when I run my little update script.
Is there a way to do this without adding another server (price is key)?
I would love to have a staging system (on a different port?) and a seamless switch between Staging and Production. On the same (free) server. Via Fabric.
How do I do that? Is it the same Nginx running both sites? Can I upgrade Staging without hurting Production? What would the fabfile look like? What would the directory tree look like?
Thanks!
Tal.
Related:
Seamless deployment in Rails
Nginx allows you to set up failover for your reverse proxies: you can put one gunicorn instance as the primary, and as long as that instance is running, nginx will never look at the failover.
If you configure your site so that your new version runs in the failover instance, you just need to write your fabfile to update the failover instance with the new version of the site and then, when ready, turn off the primary instance. Nginx will seamlessly fail over to the second instance and, bam, you are running the new version with no downtime.
You can then update the primary instance and turn it back on, and your primary is live again. At that point you can keep the failover instance running just in case, or turn it off.
Some things to consider: you have to be careful with databases. If you are using SQLite, make sure both gunicorn instances can access the SQLite file.
If you have a normal database this is less of a problem; you just need to make sure you apply any database migrations that the new version needs before you switch to it.
If they are backwards-compatible changes then it isn't a big deal. If they aren't backwards compatible then be careful: you could break the old version of the site before you switch over to the new version.
To make things easier I would run the versions in separate virtual environments.
If you use supervisord to control gunicorn, then you can use the supervisorctl commands to reload/restart whichever instance you want to deploy without affecting the other one.
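For example, if the two gunicorn instances are registered under hypothetical supervisord program names app_primary and app_failover, each can be controlled on its own:

sudo supervisorctl restart app_failover   # deploy/restart the standby with the new release
sudo supervisorctl stop app_primary       # nginx now fails over to the backup
sudo supervisorctl start app_primary      # bring the primary back once it's updated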
Hope that helps
Here is an example of an nginx config (not a full config file; the unimportant parts are removed).
This assumes the primary gunicorn instance is running on port 9005 and the other is running on port 9006
upstream service-backend {
    server localhost:9005;        # primary
    server localhost:9006 backup; # only used when primary is down
}

server {
    listen 80;
    root /opt/htdocs;
    server_name localhost;

    access_log /var/logs/nginx/access.log;
    error_log /var/logs/nginx/error.log;

    location / {
        proxy_pass http://service-backend;
    }
}
Sounds like you need to figure out how to tell gunicorn to gracefully restart. It seems like all you have to do is issue a HUP signal to the gunicorn master process to tell it to reload the app; the gunicorn docs explain how to do it.
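A hedged sketch of that, assuming gunicorn was started with a --pid file (the path is an assumption); HUP tells the master to reload the application and replace its workers gracefully:

# assuming gunicorn was started with --pid /var/run/gunicorn.pid
kill -HUP $(cat /var/run/gunicorn.pid)

# without a pid file, find the gunicorn master's PID with ps/pgrep and signal it directly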