My setup is django 1.3 and the default mod_wsgi and apache packages for ubuntu 10.04. I tested one view of my app on my development VM (DEBUG and debugging toolbar off):
ab -n 200 -c 5 http://127.0.0.1/
and got 4 requests per second. This seemed slow, so I simplified the queries, added indexes, etc. to the point where the debug toolbar tells me I have 4 queries taking 8 ms. Running the same test, I still only get 8 requests per second, with the CPU at 100% the whole time. This seems slow for what is now a very simple view, but it is just a low-powered VM.
I decided to start up a large ec2 instance (4 cpu) to see what kind of performance I would get on that class of machine and was surprised to only get 13 requests per second. How can I change the configuration of apache/mod_wsgi to get more performance out of this class of machine?
I think I am using worker rather than prefork:
$ /usr/sbin/apache2 -l
Compiled in modules:
core.c
mod_log_config.c
mod_logio.c
worker.c
http_core.c
mod_so.c
My worker configuration is:
<IfModule mpm_worker_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
and my WSGI settings look like this:
WSGIScriptAlias / /home/blah/site/proj/wsgi.py
WSGIDaemonProcess blah user=blah group=blah processes=1 threads=10
WSGIProcessGroup blah
Thanks very much for your help!
NOTE: I tried the ab test from another instance and got the same result
Make sure keep-alive is off.
I have seen better performance from more processes with a single thread each where CPU is the limiting factor; try processes=4 threads=1.
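Applied to the WSGIDaemonProcess directive from the question, that change would look something like this (a sketch; the name blah and the paths are taken from the question):

```apache
WSGIDaemonProcess blah user=blah group=blah processes=4 threads=1
WSGIProcessGroup blah
```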
The best way to tweak mod_wsgi is to not use it :)
First: I don't think your problem is the web server: with mod_wsgi you can get hundreds of requests per second. You can get better results with caching and with DB connection pooling. If you're using Postgres, take a look at pgpool II: http://pgpool.projects.postgresql.org/ .
However, if you want to go deeper into WSGI web servers, read this nice benchmark carefully: http://nichol.as/benchmark-of-python-web-servers .
If you don't need asynchronous workers, gunicorn is a good choice. It's very easy to set up (you can run it with manage.py run_gunicorn) and it's pretty fast. If you want to be sure that mod_wsgi is not the culprit, try it. If you want better performance, go with gevent or uWSGI.
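For example, a gunicorn setup might look something like this (a sketch; run_gunicorn assumes the old-style gunicorn/Django integration, with "gunicorn" added to INSTALLED_APPS):

```
python manage.py run_gunicorn 0.0.0.0:8000 --workers=4
```

Newer gunicorn versions can instead be pointed at the WSGI module directly, e.g. gunicorn --workers=4 --bind 0.0.0.0:8000 proj.wsgi:application.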
But switching the web server won't change your benchmark: you'd go from 4 req/s to 4.01 req/s.
According to the django-debug-toolbar my CPU Time is around 31000ms (on average). This is true for my own pages as well as for the admin. Here is the breakdown when loading http://127.0.0.1:8000/admin/:
Resource usage
User CPU time 500.219 msec
System CPU time 57.526 msec
Total CPU time 557.745 msec
Elapsed time 30236.380 msec
Context switches 11 voluntary, 1345 involuntary
Browser timing (Timing attribute / Milliseconds since navigation start (+length))
domainLookup 2 (+0)
connect 2 (+0)
request 7 (+30259)
response 30263 (+3)
domLoading 30279 (+1737)
domInteractive 31154
domContentLoadedEvent 31155 (+127)
loadEvent 32016 (+10)
As far as I understand, the "request" step [7 (+30259)] is the biggest bottleneck here. But what does that tell me? The request panel just shows some variables, and neither GET nor POST data.
The same code works fine hosted on PythonAnywhere; locally I am running a MacBook Air (i5, 1.3 GHz, 8 GB RAM). The performance hasn't been this poor all the time. IIRC it happened "overnight": one day I started the dev server and it was slow. I didn't change anything in the code or DB.
Is it right to assume that it could be an issue with my local machine?
EDIT:
I tried running ./manage.py runserver --noreload but the performance didn't improve. Starting the dev server (using ./manage.py runserver) also takes around 40 s, and accessing the DB using Postico takes around 1 minute. Starting the dev server with the database commented out of Django's settings makes load times normal.
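As a quick check (a hypothetical diagnostic, not from the original post), you can time how long it takes to resolve your own hostname; with a broken hosts file this lookup can hang for many seconds, which would match the ~30 s delays above:

```python
import socket
import time

t0 = time.time()
try:
    # Resolve this machine's own hostname; a misconfigured /etc/hosts
    # can make this stall for many seconds before falling back to DNS.
    addr = socket.gethostbyname(socket.gethostname())
except socket.gaierror:
    addr = None  # resolution failed outright
elapsed = time.time() - t0
print(addr, round(elapsed, 2))
```

Anything beyond a fraction of a second here points at host name resolution rather than Django itself.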
Solved it.
This post pointed me in the right direction. Essentially, I ended up "resetting" my hosts file according to this post. My hosts file now looks like this:
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
This does not, however, explain what caused the sudden "overnight" issue in the first place. My guess: a renamed hostname. A couple of days ago my hostname ended up sounding something along the lines of android- (similar to this), which was apparently caused by me using Android's file share tool. I ended up renaming my hostname to my username (see instructions below).
Perform the following tasks to change the workstation hostname using the scutil command. Open a terminal, then:
Change the primary hostname of your Mac (your fully qualified hostname, for example myMac.domain.com): sudo scutil --set HostName
Change the Bonjour hostname of your Mac (the name usable on the local network, for example myMac.local): sudo scutil --set LocalHostName
Optional: if you also want to change the computer name (the user-friendly name you see in Finder, for example myMac): sudo scutil --set ComputerName
Flush the DNS cache: dscacheutil -flushcache
Restart your Mac.
from here
Didn't test the "renaming hostname" theory, though.
I'm running a Django app on Apache 2.4.7 with mod_wsgi 3.4. The whole setup is on an EC2 ubuntu instance. Ever since I deployed the app, the server has gone down with 504/503 errors every few days with this message in the logs:
Script timed out before returning headers: wsgi.py
I've searched extensively, but all I can conclude is that I may have a memory leak somewhere. I can't figure out what's actually going wrong, since my Django install is pretty vanilla. This is the relevant part of my conf file:
WSGIScriptAlias / /home/ubuntu/projects/appname/app/app/app/wsgi.py
WSGIDaemonProcess app python-path=/home/ubuntu/projects/appname/app user=ubuntu
WSGIProcessGroup app
WSGIApplicationGroup %{GLOBAL}
Could this be from some third party library? The only extras I have installed are ImageMagick and exiftool, the latter of which isn't being used. Is there anything else I can do to debug?
Does your application call out to any backend services?
If you are getting 503/504 and that message, then it would generally indicate that your code is either hanging on backend services or that your code is blocking indefinitely on thread locks.
So basically all request threads become busy and get used up.
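As a generic illustration (not specific to this app), here is the difference between an outbound call that can block a request thread forever and one with a bounded wait, simulated with a socket pair that never receives any data:

```python
import socket

# Simulate a backend that never responds, using a connected socket pair.
a, b = socket.socketpair()
a.settimeout(0.5)  # bound the wait; without a timeout, recv() would block forever
try:
    a.recv(1)  # no data ever arrives on this end
    timed_out = False
except socket.timeout:
    timed_out = True
finally:
    a.close()
    b.close()
print(timed_out)  # True
```

Setting explicit timeouts on every backend call means a hung service costs you one slow request instead of a permanently stuck thread.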
Unfortunately you have been provided with a rather ancient version of mod_wsgi; newer versions at least have better options to combat this sort of problem in your application and recover automatically, plus log information to help you work out why it happened.
For such an old version, you can set the 'inactivity-timeout' option on WSGIDaemonProcess to '60' as a way of recovering, but this will also have the effect of restarting your application after 60 seconds without any requests, which may itself not be ideal for some apps. In newer versions the inactivity timeout is separated from the concept of a request timeout.
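Applied to the configuration from the question, that would look something like this:

```apache
WSGIDaemonProcess app python-path=/home/ubuntu/projects/appname/app user=ubuntu inactivity-timeout=60
WSGIProcessGroup app
WSGIApplicationGroup %{GLOBAL}
```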
I'd like to open up my django app to other machines in the office during development.
I understand that it's a bad idea to run the django development server as root. The recommended way to serve a django app on port 80, even during development, appears to be django, plus gunicorn, plus nginx. This seems super complicated to me. I got the first two steps working, but am now staring at nginx in utter confusion. There's no mac build on the site. Do I really have to build it from the source?
One alternative I've come across is localtunnel. But this seems sketchy to me, and involves setting up public keys and whatnot. Is there any simpler way to serve a django app on a mac from port 80 without running it as root?
Also, just what are the risks of running a django development server on port 80 as root, vs not as root? What are the chances that someone could, say, gain total access to my file system? And, given the default user settings on a mac, is this more likely if I'm running my django dev server as root than if I'm running it as not-root?
Since you mentioned you don't want to run the Django server as root and you are on a mac, you could forward traffic from port 80 to port 8000:
sudo ipfw add 100 fwd 127.0.0.1,8000 tcp from any to any 80 in
and then run the Django server as a normal user (by default it serves on port 8000)
./manage.py runserver
To remove the port forwarding, run:
sudo ipfw flush
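Note that ipfw was removed in OS X 10.10; on newer macOS versions the equivalent forwarding is done with pf. A sketch (the rule is loaded anonymously via stdin here):

```
echo "rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8000" | sudo pfctl -ef -
```

Disable it again with sudo pfctl -d.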
I wanted to run my Django application using Apache and uWSGI, so I installed Apache with the worker MPM. When I finally ran my app and tested its performance using httperf, I noticed that the system is able to serve only one user at a time. The strange thing is that when I run uWSGI with the same command as below behind nginx, I can serve 97 concurrent users. Is it possible that Apache is this slow?
My Apache configuration looks like this (the most important elements; the remaining settings are defaults):
<IfModule mpm_worker_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxClients 63
MaxRequestsPerChild 0
</IfModule>
...
<Location />
SetHandler uwsgi-handler
uWSGISocket 127.0.0.1:8000
</Location>
I run uwsgi using:
uwsgi --socket :8000 --chmod-socket --module wsgi_app --pythonpath /home/user/directory/uwsgi -p 6
I recommend that you put Apache behind Nginx. For example:
bind Apache to 127.0.0.1:81
bind nginx to 0.0.0.0:80
make nginx proxy domains that Apache should serve
It's not a direct answer to your question but that's IMHO the best solution:
best performance
best protection for Apache
lets you migrate Apache websites to Nginx step by step (uWSGI supports PHP now ...), again for best performance and security
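A minimal nginx server block for that setup might look like this (example.com is a placeholder for a domain Apache should serve):

```
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```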
I am using the fastcgi C/C++ toolkit, to develop a test fastcgi app.
I built (and am now testing) this example provided by the toolkit.
I have loaded Apache mod_fcgid and successfully restarted the apache2 daemon. However, when I try to access the fastcgi resource, it is returning a blank page.
Note: I made the following changes to the example code (as it didn't work with the default socket fd value of 0):
int sock_fd = FCGX_OpenSocket(":5000", 1); // listen on TCP port 5000 (backlog 1) instead of inheriting fd 0
FCGX_InitRequest(&request, sock_fd, 0);
My /etc/apache2/mods-enabled/fcgid.conf file looks like this:
<IfModule mod_fcgid.c>
AddHandler fcgid-script .fcgi
SocketPath /var/lib/apache2/fcgid/sock
IPCConnectTimeout 10
IPCCommTimeout 20
OutputBufferSize 0
MaxRequestsPerProcess 500
</IfModule>
My /etc/apache2/mods-enabled/fcgid.load file looks like this:
LoadModule fcgid_module /my/path/here/libs/mod_fcgid.so
I then accessed the 'resource' in a browser using the following url:
http://127.0.0.1:5000
What am I doing wrong? (assuming that someone has actually managed to get the example cited above, to work)
I am developing/testing on Linux Ubuntu 10.x
I haven't used Apache for a while, but I think your URL is wrong.
Assuming your Apache runs on port 80 and your echo.fcgi is at the root of the Apache folder, the URL would be:
http://localhost/echo.fcgi
For what it's worth, I use nginx to serve FastCGI applications.
Run apachectl -M (or apachectl -D DUMP_MODULES) to verify that you have mod_fcgid loaded. I believe no output (i.e. a white page) occurs when your process crashes.
You will need to compile your program with debugging (-O0 -ggdb), redeploy, and restart Apache.
Change your Apache config to only spawn one process. This will allow you to attach gdb to your FCGI application and debug it.
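Concretely, that setup might look something like this (FcgidMaxProcesses is the directive in current mod_fcgid; older releases called it MaxProcessCount):

```
# In fcgid.conf: limit mod_fcgid to a single application process
FcgidMaxProcesses 1
```

Then rebuild with -O0 -ggdb, restart Apache, and attach gdb to the running process, e.g. sudo gdb -p $(pgrep -f echo.fcgi).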
Let me know if you need further assistance.