How to use uwsgi to restart Django - django

I have a wsgi.ini file in my project, and I use uwsgi wsgi.ini to run my project. But when I change the Django code, I want to restart the project instead of killing uwsgi and starting it again. The uwsgi official documentation provides the following methods:
# using kill to send the signal
kill -HUP `cat /tmp/project-master.pid`
# or the convenience option --reload
uwsgi --reload /tmp/project-master.pid
# or if uwsgi was started with touch-reload=/tmp/somefile
touch /tmp/somefile
But I don't have a project-master.pid file in the /tmp directory on my system (CentOS).
My questions:
How do I use uwsgi to restart Django instead of killing it and starting it again?
If I use the method from the uwsgi official documentation, how do I create a .pid file and what content should be in it?

I found the answer. project-master.pid is set in the wsgi.ini file: you should set pidfile=/tmp/project-master.pid first. Then use uwsgi to start the server: uwsgi wsgi.ini. After it starts, you will see a project-master.pid file in the /tmp directory. When you want to reload the uwsgi server, you can restart it with: uwsgi --reload /tmp/project-master.pid.
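For reference, a minimal wsgi.ini sketch with the reload options in place; the chdir, module, port and process count are illustrative assumptions, not taken from the question:
[uwsgi]
chdir = /path/to/project
module = project.wsgi:application
master = true
processes = 4
http = :8000
# write the master pid here so uwsgi --reload (or kill -HUP) can find it
pidfile = /tmp/project-master.pid
# optional: touching this file also triggers a graceful reload
touch-reload = /tmp/somefile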

I found a simpler answer, in my opinion: you can just kill your uwsgi process and then spawn it again:
killall uwsgi
And then just run your uwsgi command again.

You don't need to use the uWSGI server for your local development needs. Apache/uWSGI are meant for production, and having them restart implicitly on every code change is often not desirable. In fact, a production server not restarting even after the code changes often acts as a safety net, so that you don't end up restarting the server without finalising the deployment.
Just use the built-in server Django provides:
python manage.py runserver 8000

Related

How to run Daphne and Gunicorn At The Same Time?

I'm using django-channels, therefore I need to use Daphne, but for the static files and other things I want to use Gunicorn. I can start Daphne alongside Gunicorn, but I can not start both of them at the same time.
My question is: should I start both of them at the same time, or is there a better option?
And if I should, how can I do that?
Here is my run-server command:
gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload && daphne -b 0.0.0.0 -p 8089 app.asgi:application
PS:
I split the locations for / and /ws/ between gunicorn and daphne in nginx.conf.
Your problem is that you're calling both processes in the same context/line, and the second one never gets called because the first one never "ends".
This process: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload
won't terminate at any point, so the command after && never gets run unless you kill the first one manually, and at that point I'm not sure that wouldn't kill the whole process chain entirely.
If you want to run both, you can background both processes with &, e.g.
(can't test, but this should work)
gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload & daphne -b 0.0.0.0 -p 8089 app.asgi:application &
The "teach a man to fish" info on this type of issue is here.
I'm fairly certain you will lose the console logging you would normally have by running these in the background; as such I'd suggest looking into nohup instead of a bare &, or sending the logs somewhere with a logging utility so you aren't flying blind.
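A rough sketch of that idea; the log file locations here are illustrative, not from the question:
nohup gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload >> /var/log/gunicorn.out 2>&1 &
nohup daphne -b 0.0.0.0 -p 8089 app.asgi:application >> /var/log/daphne.out 2>&1 &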
As for other options: if you plan to scale up to a large number of users, probably 100+, I would just run two servers, one for WSGI Django HTTP requests and one for ASGI Daphne WebSocket requests. Have nginx proxy between the two for whatever you need and you're done. That's also what Channels recommends for larger applications:
It is good practice to use a common path prefix like /ws/ to distinguish WebSocket connections from ordinary HTTP connections because it will make deploying Channels to a production environment in certain configurations easier.
In particular for large sites it will be possible to configure a production-grade HTTP server like nginx to route requests based on path to either (1) a production-grade WSGI server like Gunicorn+Django for ordinary HTTP requests or (2) a production-grade ASGI server like Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy where Daphne serves all requests - HTTP and WebSocket - rather than having a separate WSGI server. In this deployment configuration no common path prefix like /ws/ is necessary.
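As a minimal sketch of that path-based routing, assuming the same ports as the question (gunicorn on 8000, daphne on 8089) and that these blocks live inside your existing server block in nginx.conf:
location /ws/ {
    proxy_pass http://127.0.0.1:8089;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}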
It is not needed to run both. Daphne is an HTTP, HTTP/2 and WebSocket protocol server.
Take a look at the README at this link:
https://github.com/django/daphne

Gunicorn sync workers spawning processes

We're using Django + Gunicorn + Nginx on our server. The problem is that after a while we see lots of Gunicorn worker processes that have become orphans, and a lot of others that have become zombies. We can also see that some Gunicorn worker processes spawn other Gunicorn workers. Our best guess is that these workers become orphans after their parent workers have died.
Why do Gunicorn workers spawn child workers? Why do they die? And how can we prevent this?
I should also mention that we've set the Gunicorn log level to debug and still don't see anything significant, other than the periodic log of the worker count, which reports the number of workers we asked for.
UPDATE
This is the line we used to run gunicorn:
gunicorn --env DJANGO_SETTINGS_MODULE=proj.settings proj.wsgi --name proj --workers 10 --user proj --group proj --bind 127.0.0.1:7003 --log-level=debug --pid gunicorn.pid --timeout 600 --access-logfile /home/proj/access.log --error-logfile /home/proj/error.log
In my case I deploy on Ubuntu servers (LTS releases, now almost all 14.04 LTS) and I have never had problems with gunicorn daemons. I create a gunicorn.conf.py and launch gunicorn with that config from upstart, using a script like this in /etc/init/djangoapp.conf:
description "djangoapp website"
start on startup
stop on shutdown
respawn
respawn limit 10 5
script
cd /home/web/djangoapp
exec /home/web/djangoapp/bin/gunicorn -c gunicorn.conf.py -u web -g web djangoapp.wsgi
end script
I configure gunicorn with a .py config file, set up some options (details below) and deploy my app (with virtualenv) in /home/web/djangoapp, and I have no problems with zombie or orphan gunicorn processes.
I checked your options: the timeout can be a problem, but another issue is that you don't set max-requests in your config. By default it is 0, so there is no automatic worker restart in your daemon, which can lead to memory leaks (http://gunicorn-docs.readthedocs.org/en/latest/settings.html#max-requests).
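A fragment of what that might look like in a gunicorn.conf.py; the numbers are illustrative, not from the answer:
# gunicorn.conf.py
workers = 10
max_requests = 1000        # recycle a worker after this many requests
max_requests_jitter = 50   # stagger restarts so workers don't all recycle at once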
We will use a .sh file to start the gunicorn process, and later a supervisord configuration file. What is supervisord? There is an external how-to link about installing supervisord with Django, Nginx and Gunicorn here.
gunicorn_start.sh (remember to make the file executable with chmod +x):
#!/bin/sh
NAME="myDjango"
DJANGODIR="/var/www/html/myDjango"
NUM_WORKERS=3
echo "Starting myDjango -- Django Application"
cd $DJANGODIR
exec gunicorn -w $NUM_WORKERS $NAME.wsgi:application --bind 127.0.0.1:8001
mydjango_django.conf: remember to install supervisord on your OS, and copy this into its configuration folder:
[program:myDjango]
command=/var/www/html/myDjango/gunicorn_start.sh
user=root
autorestart=true
redirect_stderr=true
Later on, use these commands:
Reload the daemon's configuration files, without add/remove (no restarts):
supervisorctl reread
Restart all processes. Note: restart does not reread config files; for that, see reread and update:
supervisorctl restart all
Get all process status info:
supervisorctl status
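For completeness, the usual sequence after editing a program's .conf file looks something like this, with the program name taken from the [program:myDjango] block above:
supervisorctl reread            # pick up the edited config
supervisorctl update            # apply it: add/remove/restart affected programs
supervisorctl restart myDjango  # restart one program without rereading config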
This sounds like a timeout issue.
You have multiple timeouts going on and they all need to be in a descending order. It seems they may not be.
For example:
Nginx has a default timeout of 60 seconds
Gunicorn has a default timeout of 30 seconds
Django has a default timeout of 300 seconds
Postgres's default timeout is complicated, but let's say 60 seconds for this example.
In this example, 30 seconds pass and Django is still waiting for Postgres to respond. Gunicorn tells Django to stop, which in turn should tell Postgres to stop. Gunicorn will wait a certain amount of time for this to happen before it kills Django, leaving the Postgres query running as an orphan. The user will re-initiate their request, and this time the query will take longer because the old one is still running.
I see that you have set your Gunicorn timeout to 600 seconds.
This would probably mean that Nginx tells Gunicorn to stop after 60 seconds, Gunicorn keeps waiting for Django, which waits for Postgres or any other underlying processes, and when Nginx gets tired of waiting, it kills Gunicorn, leaving Django hanging.
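If this is the cause, one sketch of a fix is to keep each outer timeout longer than the one behind it, for example by raising nginx's proxy timeouts above the 600-second Gunicorn timeout from the question (values illustrative):
# inside the relevant server/location block in nginx.conf
proxy_connect_timeout 75s;
proxy_send_timeout    620s;
proxy_read_timeout    620s;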
This is still just a theory, but it is a very common problem and hopefully leads you and any others experiencing similar problems, to the right place.

django tutorial returns 'ERR_EMPTY_RESPONSE'

I'm following the django tutorial: version 1.8, Ubuntu 10.04, python 3.4 in a virtual environment. I seem to create a django project (yatest) on my Ubuntu server just fine and I start the development server:
(v1)cj#drop1:~/www/yatest$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
August 09, 2015 - 04:37:33
Django version 1.8.3, using settings 'yatest.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
but when I browse to http://myserver:8000 all I get in response is 'ERR_EMPTY_RESPONSE'.
This is the first part of the tutorial, before an app is even created. At this early stage the tutorial doesn't mention any error log I can check. My telnet client doesn't say anything crashed, and Ctrl-C will shut down the server process with no complaints.
Using netstat -lntp I verified no other processes are using port 8000. I do not have Apache installed. I do have gunicorn and nginx installed, but both are stopped and not yet in use at this point in the tutorial.
I'm rather new to Linux; I could use some help finding an error log or other debugging tools to solve this. I don't doubt I've missed some basic OS setting or something needed to enable TCP access, etc.
Thanks
Clark
Found my mistake. When starting a dev Django server on a dedicated server, one MUST include the dedicated server's address in the command. This is not needed when launching a dev server on the same machine as your browser. So instead of
$ python manage.py runserver
you have to run
$ python manage.py runserver <server ip>:8000
So this is my inglorious start on Stack Exchange. You saw nothing! :P
If you're running natively in a virtual environment, then you need to specify a port and address:
python manage.py runserver 127.0.0.1:8000
For containers, it's easiest to listen to all addresses:
python manage.py runserver 0.0.0.0:8000
For anybody using PyCharm in a docker environment, it's also worth knowing that PyCharm will override your docker-compose configuration to change the runserver command to bind to the port specified in the Host option in your Run/Debug Configurations window.
Make sure you set the Host to 0.0.0.0 and the port to 8000 if you want to use the debugger etc.
If you don't want the trouble of determining the server IP (i.e. when you're using containers), you can listen on 0.0.0.0:8000:
python manage.py runserver 0.0.0.0:8000

Running Gunicorn behind chrooted nginx inside virtualenv

I can get this setup to work if I start gunicorn manually, or if I add gunicorn to my Django installed apps. But when I try to start gunicorn with systemd, the gunicorn socket and service start fine but they don't serve anything to Nginx; I get a 502 Bad Gateway.
Nginx is running under the "http" user/group, in a chroot jail. I used pythonbrew to set up the virtualenvs, so gunicorn is installed in my home directory under .pythonbrew. The virtualenv directory is owned by my user and the adm group.
I'm pretty sure there is a permission issue somewhere, because everything works if I start gunicorn myself but not if systemd starts it. I've tried changing the User and Group directives inside the gunicorn.service file, but nothing worked; if root starts the server I get no errors and a 502, and if my user starts it I get no errors and a 504.
I have checked the Nginx logs and there are no errors, so I'm sure it's a gunicorn issue. Should I have the virtualenv in the app directory? Who should own the app directory? How can I narrow down the issue?
/usr/lib/systemd/system/gunicorn-app.service
#!/bin/sh
[Unit]
Description=gunicorn-app
[Service]
ExecStart=/home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin/gunicorn_django
User=http
Group=http
Restart=always
WorkingDirectory = /home/noel/.pythonbrew/venvs/Python-3.3.0/nlp/bin
[Install]
WantedBy=multi-user.target
/usr/lib/systemd/system/gunicorn-app.socket
[Unit]
Description=gunicorn-app socket
[Socket]
ListenStream=/run/unicorn.sock
ListenStream=0.0.0.0:9000
ListenStream=[::]:8000
[Install]
WantedBy=sockets.target
I realize this is kind of a sprawling question, but I'm sure I can pinpoint the issue with a few pointers. Thanks.
Update
I'm starting to narrow this down. When I run gunicorn manually and then run ps aux | grep gunicorn, I see two processes started: a master and a worker. But when I start gunicorn with systemd, only one process is started. I tried adding Type=forking to my gunicorn.service file, but then I get an error when loading the service. I thought that maybe gunicorn wasn't running under the virtualenv, or that the venv isn't getting activated?
Does anyone know what I'm doing wrong here? Maybe gunicorn isn't running in the venv?
I had a similar problem on OSX with launchd.
The issue was that I needed to allow the process to spawn sub-processes.
Try adding Type=forking:
[Unit]
Description=gunicorn-app
[Service]
Type=forking
I know this isn't the best way, but I was able to get it working by adding gunicorn to the list of django INSTALLED_APPS. Then I just created a new systemd service:
[Unit]
Description=hack way to start gunicorn and django
[Service]
User=http
Group=http
ExecStart=/srv/http/www/nlp.com/nlp/bin/python /srv/http/www/nlp.com/nlp/nlp/manage.py run_gunicorn
Restart=always
[Install]
WantedBy=multi-user.target
There must be a better way, but judging by the lack of responses not many people know what that better way is.
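One candidate for that better way is to drop both Type=forking and the run_gunicorn management command and let systemd supervise gunicorn running in the foreground, launched straight from the virtualenv's bin directory. A rough sketch reusing the paths from the unit above; the project module name, bind address and exact paths are assumptions:
[Unit]
Description=gunicorn for nlp
After=network.target

[Service]
User=http
Group=http
WorkingDirectory=/srv/http/www/nlp.com/nlp/nlp
# calling the venv's gunicorn directly means no activation step is needed
ExecStart=/srv/http/www/nlp.com/nlp/bin/gunicorn --bind 127.0.0.1:9000 nlp.wsgi:application
Restart=always

[Install]
WantedBy=multi-user.target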

Restarting SuperVisor with code changes

I am running a Django project with Gunicorn and Nginx, managed by Supervisor. Everything worked fine, but when I made some changes to the code Supervisor does not pick them up and it still serves the old code. Can you please help me? I tried restarting supervisorctl, but it didn't work.
If you're talking about python code changes, just use supervisorctl.
supervisorctl restart gunicorn (or whatever you called this)
If you're talking about supervisor configuration changes, use supervisorctl reread before starting your supervisor startup script via supervisorctl start foo
"You can gracefully reload your application in Gunicorn by sending HUP signal: $ kill -HUP masterpid", http://docs.gunicorn.org/en/stable/faq.html
For example, pkill -HUP gunicorn
"Sending HUP signal to the Master Gunicorn process -- Reload the configuration, start the new worker processes with a new configuration and gracefully shutdown older workers.", http://docs.gunicorn.org/en/stable/signals.html