I have the following config file for Upstart, and it starts the Flask server fine, but whenever there is an exception in the app the log file doesn't have the exception information.
start on runlevel [2345]
stop on runlevel [06]
respawn
script
cd /var/www/binary-fission/server
export BF_CONFIG=config/staging.py
exec uwsgi --http 0.0.0.0:5000 --wsgi-file server.py --callable app --master --threads 2 --processes 4 --logto /var/log/binary-fission/server.log
end script
However, if I run the same uwsgi command manually without Upstart, the exception is logged.
How do I make Upstart + uWSGI log the exception from a Flask application?
It turned out that turning on the "PROPAGATE_EXCEPTIONS" option in the Flask configuration file (config/staging.py) fixed the issue. In that configuration file "DEBUG" is turned off, and disabling "DEBUG" implicitly disables "PROPAGATE_EXCEPTIONS" as well.
When I ran the uwsgi command manually, I didn't specify the configuration file, so my Flask app fell back to the default configuration with "DEBUG" on.
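For reference, a minimal sketch of what the relevant lines in config/staging.py could look like (only DEBUG and PROPAGATE_EXCEPTIONS come from this answer; anything else is an assumption):

# config/staging.py (sketch)
DEBUG = False
# With DEBUG off, Flask leaves PROPAGATE_EXCEPTIONS off too, so tracebacks
# never reach uWSGI's log; turn it back on explicitly:
PROPAGATE_EXCEPTIONS = True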
Hi all. I've got a weird problem running gunicorn with supervisor to serve a Flask application. Before I start stating my problem, I just want to say that I'm aware of this SO post. Nevertheless, my problem is that I could run gunicorn directly in the foreground to serve the app; the problem only occurred when I used supervisor.
Here's the structure of my flask application:
myproject/
|---webapp/
| |---__init__.py # create_app function is here
| |--- ... # other files in webapp
|---wsgi.py
|---manage.py
|---venv # gunicorn is in venv/bin
The wsgi.py file looks like the following
from webapp import create_app

app = create_app()

if __name__ == "__main__":
    app.run()
Sitting in the myproject directory, I can launch the app by running
$ venv/bin/gunicorn -w 4 wsgi:app
Then I created a webapp.conf in the /etc/supervisor/conf.d/ directory, as follows.
[program:webapp]
directory=/home/<my username>/projects/myproject
command=/home/<my username>/projects/myproject/venv/bin/gunicorn -w 4 wsgi:app
user=<my username>
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=<some path>
stderr_logfile=<some path>
When I reloaded my supervisor, the stdout log file said
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: No application module specified.
My question is, why do I get an error here if I don't get an error when running gunicorn directly? What's the difference?
By the way, I then tried to use systemd to run gunicorn following this post, and systemd can launch the gunicorn service normally.
Any idea what I did wrong with supervisor? Thanks very much.
I have a wsgi.ini file in my project, and I use uwsgi wsgi.ini to run my project. But when I change the Django code, I want to restart the project rather than killing uwsgi and reloading it. The official uwsgi documentation provides the following methods:
# using kill to send the signal
kill -HUP `cat /tmp/project-master.pid`
# or the convenience option --reload
uwsgi --reload /tmp/project-master.pid
# or if uwsgi was started with touch-reload=/tmp/somefile
touch /tmp/somefile
But I don't have a project-master.pid file in the /tmp directory on my system (CentOS).
My questions:
How do I use uwsgi to restart Django instead of killing it and starting it again?
If I use the method from the official uwsgi documentation, how do I create the .pid file, and what content should be in it?
I found the answer. project-master.pid is set in the wsgi.ini file; you should set pidfile=/tmp/project-master.pid first, then start the server with uwsgi wsgi.ini. After it starts, you will see a project-master.pid file in the /tmp directory. When you want to reload the uwsgi server, you can restart it with: uwsgi --reload /tmp/project-master.pid.
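Put together, the relevant part of wsgi.ini could look like this (only pidfile comes from this answer; the other entries are illustrative assumptions):

[uwsgi]
module = myproject.wsgi:application
socket = /tmp/myproject.sock
master = true
pidfile = /tmp/project-master.pid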
I found a simpler answer, in my opinion: you can just kill your uwsgi process and then spawn it again:
killall uwsgi
And then just run your uwsgi command again.
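For example, assuming the project is started with uwsgi wsgi.ini as above:

killall uwsgi    # stop all running uwsgi processes
uwsgi wsgi.ini   # start the server again with the same config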
You don't need to use the uWSGI server for your local development needs. Apache/uWSGI are meant for production, and having them restart implicitly at every code change is not often desirable. In fact, a production server that does not restart when the code changes often acts as a safety net, so that you don't end up restarting the server before finalising the deployment.
Just use the built-in development server that Django provides, which reloads automatically on code changes:
python manage.py runserver 8000
We're using Django + Gunicorn + Nginx on our server. The problem is that after a while we see lots of gunicorn worker processes that have become orphans, and a lot of others that have become zombies. We can also see that some Gunicorn worker processes spawn other Gunicorn workers. Our best guess is that these workers become orphans after their parent workers have died.
Why do Gunicorn workers spawn child workers? Why do they die?! And how can we prevent this?
I should also mention that we've set Gunicorn's log level to debug and we still don't see anything significant, other than a periodic log of the worker count, which reports the number of workers we asked for.
UPDATE
This is the line we used to run gunicorn:
gunicorn --env DJANGO_SETTINGS_MODULE=proj.settings proj.wsgi --name proj --workers 10 --user proj --group proj --bind 127.0.0.1:7003 --log-level=debug --pid gunicorn.pid --timeout 600 --access-logfile /home/proj/access.log --error-logfile /home/proj/error.log
In my case I deploy on Ubuntu servers (LTS releases, now almost all 14.04 LTS) and I have never had problems with gunicorn daemons. I create a gunicorn.conf.py and launch gunicorn with this config from Upstart, using a script like this in /etc/init/djangoapp.conf:
description "djangoapp website"
start on startup
stop on shutdown
respawn
respawn limit 10 5
script
cd /home/web/djangoapp
exec /home/web/djangoapp/bin/gunicorn -c gunicorn.conf.py -u web -g web djangoapp.wsgi
end script
I configure gunicorn with a .py config file, set up some options (details below), and deploy my app (with virtualenv) in /home/web/djangoapp, and I have no problems with zombie or orphan gunicorn processes.
I checked your options: the timeout can be a problem, but another one is that you don't set max-requests in your config. By default it is 0, so there is no automatic worker restart in your daemon, which can let memory leaks build up (http://gunicorn-docs.readthedocs.org/en/latest/settings.html#max-requests).
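A minimal gunicorn.conf.py along these lines (all values are illustrative assumptions, not the poster's actual config):

# gunicorn.conf.py (sketch)
bind = "127.0.0.1:8000"
workers = 4
timeout = 60              # keep this below nginx's proxy timeout
max_requests = 1000       # recycle each worker after 1000 requests
max_requests_jitter = 50  # stagger restarts so workers don't all recycle at once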
We will use a .sh file to start the gunicorn process, and then a supervisord configuration file to manage it. What is supervisord? See this external guide on how to install supervisord with Django, Nginx and Gunicorn: Here
gunicorn_start.sh (remember to make the file executable with chmod +x):
#!/bin/sh
NAME="myDjango"
DJANGODIR="/var/www/html/myDjango"
NUM_WORKERS=3
echo "Starting myDjango -- Django Application"
cd $DJANGODIR
exec gunicorn -w $NUM_WORKERS $NAME.wsgi:application --bind 127.0.0.1:8001
mydjango_django.conf: remember to install supervisord on your OS, then copy this into the configuration folder:
[program:myDjango]
command=/var/www/html/myDjango/gunicorn_start.sh
user=root
autorestart=true
redirect_stderr=true
Later on, use these commands:
Reload the daemon's configuration files, without adding/removing anything (no restarts):
supervisorctl reread
Start all processes (note: restart does not reread config files; for that, see reread and update):
supervisorctl start all
Get status info for all processes:
supervisorctl status
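Note that reread only detects config changes; to actually apply a new or changed program section, follow it with update:

supervisorctl reread   # detect config changes
supervisorctl update   # apply them (starts/stops affected programs)
supervisorctl status   # verify everything is RUNNING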
This sounds like a timeout issue.
You have multiple timeouts going on and they all need to be in a descending order. It seems they may not be.
For example:
Nginx has a default timeout of 60 seconds
Gunicorn has a default timeout of 30 seconds
Django has a default timeout of 300 seconds
Postgres's default timeout is complicated, but let's assume 60 seconds for this example.
In this example, 30 seconds pass and Django is still waiting for Postgres to respond. Gunicorn tells Django to stop, which in turn should tell Postgres to stop. Gunicorn will wait a certain amount of time for this to happen before it kills Django, leaving the Postgres query running as an orphan. The user then re-initiates the request, and this time the query takes longer because the old one is still running.
I see that you have set your Gunicorn timeout to 600 seconds.
This would probably mean that Nginx tells Gunicorn to stop after 60 seconds; Gunicorn keeps waiting for Django, which waits for Postgres or other underlying processes; and when Nginx gets tired of waiting, it kills Gunicorn, leaving Django hanging.
This is still just a theory, but it is a very common problem and hopefully leads you and any others experiencing similar problems, to the right place.
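As an illustration of keeping the timeouts in descending order from the outside in, the nginx side could be set explicitly (the location block and directive values are assumptions; the gunicorn command in the question already sets --timeout 600):

# nginx server block (sketch)
location / {
    proxy_pass http://127.0.0.1:7003;
    proxy_read_timeout 630s;  # longer than gunicorn's 600s, so gunicorn times out first
}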
I have a Django website that uses an environment variable, DJANGO_MODE, to decide which settings to use: development or staging. The environment variable is in the bashrc, and when running the app using the development server, everything works fine.
But when I run the app using uWSGI, it doesn't seem to notice the environment variable and uses the default (development) settings instead of production.
I run uWSGI in Emperor mode, and other than the environment variable being ignored, everything seems to be running fine. And yes, the user running uWSGI is the same one whose bashrc has DJANGO_MODE set.
The command used to run uWSGI is:
exec uwsgi --emperor /etc/uwsgi/vassals --uid web_user --gid web_user
And the ini file for the vassal -
[uwsgi]
processes = 2
socket = /tmp/uwsgi.sock
wsgi-file = /home/web_user/web/project_dir/project/wsgi.py
chdir = /home/web_user/web/project_dir
virtualenv = /home/web_user/.virtualenvs/production_env
logger = syslog
chmod-socket = 777
It cannot work, as bash config files are read only by bash. You have to set the variable in the emperor config or in the vassal config (the second one is the better approach). Just add
env=DJANGO_MODE=foobar
to your config (do not use whitespace).
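On the Django side, the settings code would then branch on that variable; a hedged sketch (the module names here are hypothetical, not from the question):

# settings.py (sketch)
import os

DJANGO_MODE = os.environ.get("DJANGO_MODE", "development")

if DJANGO_MODE == "staging":
    from .staging_settings import *  # hypothetical staging settings module
else:
    from .dev_settings import *      # hypothetical development settings module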
For my current flask deployment, I had to set up a uwsgi server.
This is how I have created the uwsgi daemon:
sudo vim /etc/init/uwsgi.conf
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /myproject/myproject-env/bin/uwsgi --uid www-data --gid www-data --home /myproject/myproject-env/site/F11/Engineering/ --socket /tmp/uwsgi.sock --chmod-socket --module F11 --callable app --pythonpath /myproject/myproject-env/site/F11/Engineering/ -H /myproject/myproject-env
However after running this successfully: sudo start uwsgi
uwsgi start/running, process 1286
And trying to access the application via browser:
I get a 502 Bad Gateway
and an error entry in nginx error.log:
2013/06/13 23:47:28 [error] 743#0: *296 upstream prematurely closed
connection while reading response header from upstream, client:
xx.161.xx.228, server: myproject.com, request: "GET /show_records/2013/6 HTTP/1.1", upstream:
"uwsgi://unix:///tmp/uwsgi.sock:", host: "myproject.com"
But the sock file has the permission it needs:
srw-rw-rw- 1 www-data www-data 0 Jun 13 23:46 /tmp/uwsgi.sock
If I run the exec command from above on the command line as a regular process, it works perfectly fine. Why is the daemon not working correctly?
btw Nginx is running as
vim /etc/nginx/nginx.conf
user www-data;
and vim /etc/nginx/sites-available/default
location / {
uwsgi_pass unix:///tmp/uwsgi.sock;
include uwsgi_params;
}
and it is started as sudo service nginx start
I am running this on Ubuntu 12.04 LTS.
I hope I have provided all the necessary data, hope someone can guide me into the right direction. Thanks.
Finally I have solved this problem after working nearly 2 days on it. I hope this solution will help other flask/uwsgi users that are experiencing a similar problem.
I had two major issues that caused this.
1) The best way to find the problems with a daemon is obviously a log file and a cleaner structure.
sudo vim /etc/init/uwsgi.conf
Change the daemon script to the following:
# file: /etc/init/uwsgi.conf
description "uWSGI server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /home/ubuntu/uwsgi-1.9.12/uwsgi -c /myproject/uwsgi.ini
vim /myproject/uwsgi.ini
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
enable-threads = true
processes = 5
chdir= /myproject/F11/Engineering
module=F11:app
virtualenv = /myproject/myproject-env/
uid = www-data
gid = www-data
logto = /myproject/error.log
This is a much cleaner way of setting up the daemon. Also notice the last line, which sets up the log file. Initially I had set the log file to /var/log/uwsgi/error.log. After a lot of sweat and tears I realized the daemon was running as www-data and hence could not access /var/log/uwsgi/error.log, since error.log was owned by root:root. This made uwsgi fail silently.
I found it much more efficient to just point the log file to my own /myproject, where the daemon has guaranteed access as www-data. And also don't forget to make the whole project accessible to www-data, or the daemon will fail with an Internal Server Error message:
sudo chown www-data:www-data -R /myproject/
Restart uwsgi daemon:
sudo service uwsgi restart
2) Now you have three log files to look out for:
tail -f /var/log/upstart/uwsgi.log --> Shows problems with your daemon upon start
tail -f /var/log/nginx/error.log --> Shows permission problems when wsgi access is refused, often because /tmp/uwsgi.sock file is owned by root instead of www-data. In that case simply delete the sock file sudo rm /tmp/uwsgi.sock
tail -f /myproject/error.log --> Shows errors thrown by uwsgi in your application
This combination of log files helped me figure out that I also had a bad import of Flask-Babel in my Flask application: bad in the sense that the way I used the library fell back to the system locale to determine the datetime format.
File "/myproject/F11/Engineering/f11_app/templates/show_records.html", line 25, in block "body"
<td>{{ record.record_date|format_date }}</td>
File "./f11_app/filters.py", line 7, in format_date
day = babel_dates.format_date(value, "EE")
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 459, in format_date
return pattern.apply(date, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 702, in apply
return self % DateTimeFormat(datetime, locale)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 699, in __mod__
return self.format % other
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 734, in __getitem__
return self.format_weekday(char, num)
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 821, in format_weekday
return get_day_names(width, context, self.locale)[weekday]
File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 69, in get_day_names
return Locale.parse(locale).days[context][width]
AttributeError: 'NoneType' object has no attribute 'days'
This is the way I was using the Flask filter:
import babel.dates as babel_dates

@app.template_filter('format_date')
def format_date(value):
    day = babel_dates.format_date(value, "EE")
    return '{0} {1}'.format(day.upper(), affix(value.day))
The strangest part is that this code worked perfectly fine in the dev environment (!). It even worked when running uwsgi as a root process from the command line. But it failed when run by the www-data daemon. This must have something to do with how the locale is set, which Flask-Babel tries to fall back to.
When I changed the import like this, it all finally worked with the daemon:
from flask.ext.babel import format_date

@app.template_filter('format_date1')
def format_date1(value):
    day = format_date(value, "EE")
    return '{0} {1}'.format(day.upper(), affix(value.day))
Hence, be careful when Eclipse/Aptana Studio tries to pick the namespace for your imports; it can really turn ugly.
It has now been running perfectly fine as a uwsgi daemon on an Amazon EC2 instance (Ubuntu 12.04) for two days. I hope this experience helps fellow Python developers.
I gave up: there was no uwsgi.log generated, and nginx just kept complaining about:
2014/03/06 01:06:28 [error] 23175#0: *22 upstream prematurely closed connection while reading response header from upstream, client: client.IP, server: my.server.IP, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/var/web/the_gelatospot/uwsgi.sock:", host: "host.ip"
for every single request. This only happened when running uwsgi as a service; as a process it started fine under any user. So this would work from the command line (the page responds):
$exec /var/web/the_gelatospot/mez_server.sh
This didn't (/etc/init/site_service.conf):
description "mez sites virtualenv and uwsgi_django" start on runlevel
[2345] stop on runlevel [06] respawn respawn limit 10 5 exec
/var/web/the_gelatospot/mez_server.sh
The process would start, but on each request nginx would complain about the closed connection. Strangely, I have this same config working just fine for two other apps using the same nginx and uwsgi versions; both of those are also Mezzanine CMS apps. I tried everything I could think of and everything that was suggested. In the end I switched to gunicorn, which works fine:
#!/bin/bash
NAME="the_gelatospot" # Name of the application
DJANGODIR=/var/web/the_gelatospot # Django project directory
SOCKFILE=/var/web/the_gelatospot/gunicorn.sock # we will communicate using this unix socket
USER=ec2-user
GROUP=ec2-user # the user to run as, the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=settings
#DJANGO_SETTINGS_MODULE=the_gelatospot.settings # which settings file should Django use
#DJANGO_WSGI_MODULE=the_gelatospot.wsgi # WSGI module name
DJANGO_WSGI_MODULE=wsgi
echo "Starting $NAME as `the_gelatospot`"
# Activate the virtual environment
cd $DJANGODIR
source ../mez_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd ..
# Start your Django Gunicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
And here is the one that wouldn't work as a service (nginx complained of a prematurely closed connection, and no app log data came through).
#!/bin/bash
DJANGODIR=/var/web/the_gelatospot/ # Django project directory
cd $DJANGODIR
source ../mez_env/bin/activate
uwsgi --ini uwsgi.ini
And the uwsgi.ini:
[uwsgi]
uid = 222
gid = 500
socket = /var/web/the_gelatospot/uwsgi.sock
virtualenv = /var/web/mez_env
chdir = /var/web/the_gelatospot/
wsgi-file = /var/web/the_gelatospot/wsgi.py
pythonpath = ..
env = DJANGO_SETTINGS_MODULE=the_gelatospot.settings
die-on-term = true
master = true
chmod-socket = 666
;experiment using uwsgitop
worker = 1
;gevent = 100
processes = 1
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.logi
log-maxsize = 10000000
enable-threads = true
I went from gunicorn to uWSGI last year and had no issues with it until now; it also seemed a bit faster than gunicorn. Right now I'm thinking of sticking with gunicorn. It's getting better, puts up much nicer numbers with eventlet installed, and is easier to configure.
Hope this workaround helps. I would still like to know the issue with uWSGI and nginx but I'm stumped.
Update:
So using gunicorn allowed me to run the server as a service, and while toying with mezzanine I ran into this error: FileSystemEncodingChanged
To fix this I found the solution here:
https://groups.google.com/forum/#!msg/mezzanine-users/bdln_Y99zQw/9HrhNSKFyZsJ
And I had to modify it a bit since I don't use supervisord; I only use upstart and a shell script. I added this right before executing gunicorn in my mez_server.sh file:
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec gunicorn -k eventlet ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--log-level=debug \
--bind=unix:$SOCKFILE
I also tried this fix using uWSGI like so (a lot shorter since it uses uwsgi.ini):
export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_LANG=en_US.UTF-8
exec uwsgi --ini uwsgi.ini
I am still sticking with gunicorn, since it worked despite the problem and led me in the right direction to resolve it. I was very disappointed that uWSGI provided no output in the log file even with these params; I only saw the server start process and that's it:
daemonize = /var/log/nginx/uwsgi.log
logto = /var/log/nginx/uwsgi.logi
While nginx kept throwing the disconnect error, uWSGI sat there as if nothing was happening.
As a single line, the command to run gunicorn as a daemon is:
gunicorn app.wsgi:application -b 127.0.0.1:8000 --daemon
This binds your app to 127.0.0.1:8000, and --daemon forces it to run as a daemon.
But defining a gunicorn_config.cfg file and running with the -c flag is good practice (see the sketch after the link below).
for more:
https://gunicorn-docs.readthedocs.org/en/develop/configure.html
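For illustration, a minimal config of that kind (gunicorn config files are plain Python; the file name and values here are assumptions):

# gunicorn_config.py (sketch)
bind = "127.0.0.1:8000"            # same as -b above
workers = 2
daemon = True                      # same as --daemon above
pidfile = "/tmp/gunicorn_app.pid"
errorlog = "/var/log/gunicorn/error.log"

Then run it with:

gunicorn -c gunicorn_config.py app.wsgi:application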