How to gracefully restart Django Gunicorn?

I made a Django application using Docker.
Currently I send a HUP signal to the Gunicorn master process to restart it.
If I simply send a HUP signal, the workers are likely to restart without fully completing what they were doing.
Does Gunicorn support graceful restart just by sending a HUP signal to the master process?
If not, how can I gracefully restart Gunicorn?

Related

Symfony Messenger not shutting down gracefully when using APP_ENV=prod

We are using Symfony Messenger in combination with supervisor, running in a Docker container on AWS ECS. We noticed the worker is not shut down gracefully. After debugging, it appears it does work as expected when using APP_ENV=dev, but not when APP_ENV=prod.
I made a simple sleepMessage, which sleeps for 1 second and then prints a message, for 60 seconds. When running with APP_ENV=dev, the worker clearly waits for the message handler to finish before stopping.
With APP_ENV=prod, it stops immediately without waiting.
In the Dockerfile we have configured the following to start supervisor. The image is based on php:8.1-apache, which is why STOPSIGNAL has been configured.
RUN apt-get update && apt-get install -y --no-install-recommends \
# for supervisor
python \
supervisor
The start-worker.sh script contains this
#!/usr/bin/env bash
cp config/worker/messenger-worker.conf ../../../etc/supervisor/supervisord.conf
exec /usr/bin/supervisord
We do this because certain env variables are only available when starting up.
For debugging purposes the config has been hardcoded to test.
Below is the messenger-worker.conf
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
nodaemon=true ; start in foreground if true; default false
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:messenger-consume]
stderr_logfile_maxbytes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
command=bin/console messenger:consume async -vv --env=prod --time-limit=129600
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
numprocs=1
environment=
MESSENGER_TRANSPORT_DSN="https://sqs.eu-central-1.amazonaws.com/{id}/dev-symfony-messenger-queue"
So in short, when using --env=prod in the config above, it doesn't wait for the worker to stop, while with --env=dev it does. Does anybody know how to solve this?
I don't know why there would be a difference between the dev & prod environments, but it seems you have no grace period set (at least for Supervisor). As I added in the docs:
the workers will be able to handle the SIGTERM signal if you have the PCNTL PHP extension
you need to add stopwaitsecs to your Supervisor program configuration
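For example, a minimal sketch of the Supervisor side (20 seconds is an arbitrary example value, not from the original config):
[program:messenger-consume]
; give each worker up to 20 seconds to finish its current message on SIGTERM
stopwaitsecs=20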
As you use Docker too, you can also set the grace period at the service level, which defaults to 10s:
services:
my_app:
stop_grace_period: 20s
# ...
With this configuration, running docker-compose down (just an example):
Docker sends a SIGTERM signal to the service entrypoint (Supervisor) and waits 20s for it to exit
Supervisor sends a SIGTERM signal to its programs (messenger:consume commands) and waits 20s for them to exit
the messenger:consume processes will "catch" the signal, finish handling the current message and stop
once every program has stopped, Supervisor can stop, and then the Docker Compose stack can too
Turns out it was related to the wait_time option of the SQS transport. It probably caused a request that was started just before the container exited to be sent back when the container did not exist anymore. So, setting wait_time to 0 fixed that problem.
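For reference, a sketch of how wait_time can be set through the transport DSN (the queue URL is the placeholder from the question):
MESSENGER_TRANSPORT_DSN="https://sqs.eu-central-1.amazonaws.com/{id}/dev-symfony-messenger-queue?wait_time=0"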
Then there was this which could lead to the same issue

Restarting django-q qcluster gracefully

How can we restart qcluster gracefully when server code changes? E.g. we use Gunicorn to run the Django server, which lets you gracefully restart workers without downtime. How can you restart qcluster workers without disturbing any ongoing worker processing? Thanks.
I tried to restart Django-Q gracefully using kill -SIGHUP, since it is a convention for restarting services gracefully, but it didn't work. I noticed Django-Q restarts gracefully when receiving a CTRL+C, and from there I found the solution.
# -2 is SIGINT. It acts like CTRL+C.
pkill -e -2 --full 'python manage.py qcluster'; (python manage.py qcluster &)

uWSGI downtime when restarting

I have a problem with uWSGI every time I restart the server when I have code updates.
When I restart uWSGI using "sudo restart accounting", there's a small gap between the stop and the start of the instance that results in downtime and kills all current requests.
When I try "sudo reload accounting", it works, but my memory usage doubles. When I run "ps aux | grep accounting", it shows that I have 10 running processes (accounting.ini) instead of 5, and it freezes up my server when the memory hits the limit.
accounting.ini
I am running
Ubuntu 14.04
Django 1.9
nginx 1.4.6
uwsgi 2.0.12
This is how uWSGI does a graceful reload: it keeps the old processes alive until their requests are served and creates new ones that will take over incoming requests.
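For reference, a graceful reload is typically triggered by sending HUP to the uWSGI master; a minimal sketch, assuming the master was started with a pidfile (the path is an example):
kill -HUP $(cat /var/run/uwsgi.pid)
# or, equivalently:
uwsgi --reload /var/run/uwsgi.pid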
Read "Things that could go wrong":
Do not forget, your workers/threads that are still running requests could block the reload (for various reasons) for more seconds than your proxy server could tolerate.
And this:
Another important step of graceful reload is to avoid destroying workers/threads that are still managing requests. Obviously requests could be stuck, so you should have a timeout for running workers (in uWSGI it is called the “worker’s mercy” and it has a default value of 60 seconds).
So I would recommend trying worker-reload-mercy.
The default is to wait 60 seconds; lower it to something that your server can handle.
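A minimal sketch of that setting (10 seconds is just an example value):
# your .ini file
worker-reload-mercy = 10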
Tell me if it worked.
Uwsgi chain reload
This is another attempt to fix your issue. As you mentioned, your uWSGI workers restart in the manner described below:
Send a SIGHUP signal to the master.
Wait for running workers.
Close all of the file descriptors except the ones mapped to sockets.
Call exec() on itself.
One of the cons of this kind of reload might be stuck workers.
Additionally, you report that your server crashes when uWSGI maintains 10 processes (5 old and 5 new ones).
I propose trying chain reload. A direct quote from the documentation explains this kind of reload best:
When triggered, it will restart one worker at time, and the following worker is not reloaded until the previous one is ready to accept new requests.
It means that you will not have 10 processes on your server but only 5.
Config that should work:
# your .ini file
lazy-apps = true
touch-chain-reload = /path/to/reloadFile
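Once that is in place, a reload is triggered by touching the configured file:
touch /path/to/reloadFile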
Some resources on chain reloading and other kinds of reloading are in the links below:
Chain reloading uwsgi docs
uWSGI graceful Python code deploy

Restarting SuperVisor with code changes

I am running a Django project with Gunicorn and Nginx under Supervisor. Everything worked fine, but when I made some changes to the code they are not recognized by Supervisor, and it still runs the old code. Can you please help me? I tried to restart it with supervisorctl, but it didn't work.
If you're talking about python code changes, just use supervisorctl.
supervisorctl restart gunicorn (or whatever you called this)
If you're talking about Supervisor configuration changes, use supervisorctl reread before starting the program again via supervisorctl start foo.
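For example, a typical sequence (foo is a placeholder program name; supervisorctl update is one common way to apply the reread configuration):
supervisorctl reread
supervisorctl update
supervisorctl start foo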
"You can gracefully reload your application in Gunicorn by sending HUP signal: $ kill -HUP masterpid", http://docs.gunicorn.org/en/stable/faq.html
For example, pkill -HUP gunicorn
"Sending HUP signal to the Master Gunicorn process -- Reload the configuration, start the new worker processes with a new configuration and gracefully shutdown older workers.", http://docs.gunicorn.org/en/stable/signals.html

How to gracefully restart django running fcgi behind nginx?

I'm running a Django instance behind nginx, connected using FastCGI (by using the manage.py runfcgi command). Since the code is loaded into memory, I can't reload new code without killing and restarting the Django fcgi processes, thus interrupting the live website. The restarting itself is very fast, but by killing the fcgi processes first, some users' actions will get interrupted, which is not good.
I'm wondering how I can reload new code without ever causing any interruption. Advice will be highly appreciated!
I would start a new fcgi process on a new port, change the nginx configuration to use the new port, have nginx reload configuration (which in itself is graceful), then eventually stop the old process (you can use netstat to find out when the last connection to the old port is closed).
Alternatively, you can change the fcgi implementation to fork a new process, close all sockets in the child except for the fcgi server socket, close the fcgi server socket in parent, exec a new django process in the child (making it use the fcgi server socket), and terminate the parent process once all fcgi connections are closed. IOW, implement graceful restart for runfcgi.
So I went ahead and implemented Martin's suggestion. Here is the bash script I came up with.
#!/usr/bin/env bash
pid_file=/path/to/pidfile
port_file=/path/to/port_file
old_pid=$(cat "$pid_file")

# Pick the next port based on the last one used
if [[ -f $port_file ]]; then
    last_port=$(cat "$port_file")
    port_to_use=$((last_port + 1))
else
    last_port=8000
    port_to_use=8000
fi

# Reset so we don't go up forever
if [[ $port_to_use -gt 8999 ]]; then
    port_to_use=8000
fi

# Swap the old port for the new one in the nginx config
sed -i "s/$last_port/$port_to_use/g" /path/to/nginx.conf

# Start the new fcgi process on the new port
python manage.py runfcgi host=127.0.0.1 port=$port_to_use maxchildren=5 maxspare=5 minspare=2 method=prefork pidfile=$pid_file
echo $port_to_use > "$port_file"

# Reload nginx gracefully so new requests go to the new port
kill -HUP $(cat /var/run/nginx.pid)

echo "Sleeping for 5 seconds"
sleep 5s
echo "Killing old processes on $last_port, pid $old_pid"
kill "$old_pid"
I came across this page while looking for a solution to this problem. Everything else failed, so I looked into the source code :)
The solution seems to be much simpler. The Django fcgi server uses flup, which handles the HUP signal the proper way: it shuts down gracefully. So all you have to do is:
send the HUP signal to the fcgi server (the pidfile= argument of runfcgi will come in handy)
wait a bit (flup allows child processes 10 seconds, so wait a couple more; 15 looks like a good number)
send the KILL signal to the fcgi server, just in case something blocked it
start the server again
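Put together, a minimal sketch of those steps (the pidfile path, port and timings are assumptions):
pid=$(cat /path/to/fcgi.pid)
kill -HUP $pid               # ask flup to shut down gracefully
sleep 15                     # flup gives children 10 seconds; wait a bit longer
kill -KILL $pid 2>/dev/null  # just in case something blocked the shutdown
python manage.py runfcgi host=127.0.0.1 port=8000 pidfile=/path/to/fcgi.pid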
That's it.
You can use spawning instead of FastCGI
http://www.eflorenzano.com/blog/post/spawning-django/
We finally found the proper solution to this!
http://rambleon.usebox.net/post/3279121000/how-to-gracefully-restart-django-running-fastcgi
First send flup a HUP signal to signal a restart. Flup will then do this to all of its children:
closes the socket which will stop inactive children
sends an INT signal
waits 10 seconds
sends a KILL signal
When all the children are gone it will start new ones.
This works almost all of the time, except that if a child is handling a request when flup executes step 2 then your server will die with KeyboardInterrupt, giving the user a 500 error.
The solution is to install a SIGINT handler - see the page above for details. Even just ignoring SIGINT gives your process 10 seconds to exit, which is enough for most requests.
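A minimal sketch of the "just ignore SIGINT" variant, run early in the process that serves fcgi requests (an illustration only; see the linked page for a full handler):

import signal

# Ignore SIGINT so an in-flight request isn't aborted with a
# KeyboardInterrupt; flup's follow-up KILL after 10 seconds still applies.
signal.signal(signal.SIGINT, signal.SIG_IGN)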