WSGI using more daemon processes than it should? - django

So I set up a WSGI server running Python/Django code and stuck the following in my httpd.conf file:
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django
However, when I go to the page and hit "refresh" really quickly, it seems that I am getting way more than two processes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21042 django 20 0 975m 36m 4440 S 98.9 6.2 0:15.63 httpd
1017 root 20 0 67688 2352 740 S 0.3 0.4 0:10.50 sendmail
21041 django 20 0 974m 40m 4412 S 0.3 6.7 0:16.36 httpd
21255 django 20 0 267m 8536 2036 S 0.3 1.4 0:01.02 httpd
21256 django 20 0 267m 8536 2036 S 0.3 1.4 0:00.01 httpd
I thought setting processes=2 would limit it to two processes. Is there something I'm missing?
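One possibility worth ruling out: only some of those httpd entries are mod_wsgi daemon processes; the rest may be Apache's own MPM worker processes, which processes=2 does not limit. A hedged way to tell them apart is mod_wsgi's display-name option (shown here as a variant of the directive above):

```apache
# display-name renames the daemon processes so they are distinguishable
# from regular Apache workers in ps/top output.
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django display-name=%{GROUP}
```

With this in place, `ps aux | grep wsgi` should show exactly two processes named `(wsgi:mysite.com)`; any remaining httpd entries belong to Apache itself.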

Related

Django 1.8, mod_wsgi, apache 2.4 setup on CentOS 7 Not Working

I'm having a hell of a time with a server error that sounds like it has an obvious solution, but isn't working out that way:
[:error] [pid 10979] (13)Permission denied: [remote xxx.xx.xxx.xxx:20] mod_wsgi (pid=10979, process='testsite', application='xxx.xxx.xx.xx|/testsite'): Call to fopen() failed for '/home/jnett/testsite/testsite/wsgi.py'.
For obvious reasons, I took out the actual server addresses. I've set up so many Django sites on Red Hat, earlier CentOS distributions, OS X, and even Ubuntu, and took lots of notes in all cases. Still, I cannot seem to get this configuration right.
So here it is:
1.
My apache configuration is
<VirtualHost *:80>
ServerName xxx.xxx.xx.xx
ServerAlias *.xxx.xxx.xx.xx
WSGIDaemonProcess testsite python-path=/home/jnett/testsite:/home/jnett/airview_env/lib/python2.7/site-packages
WSGIScriptAlias /testsite /home/jnett/testsite/testsite/wsgi.py process-group=testsite
Alias /static /home/jnett/testsite/static
<Directory /home/jnett/testsite/testsite>
Require all granted
</Directory>
<Location /home/jnett/testsite/static>
Options -Indexes
</Location>
</VirtualHost>
2.
My wsgi.py file is located where it should be according to the above apache configuration and contains:
import os, sys
sys.path.append( '/home/jnett/airview_env/lib/python2.7/site-packages' )
sys.path.append( '' )
from django.core.wsgi import get_wsgi_application
os.environ["DJANGO_SETTINGS_MODULE"] = "testsite.settings"
application = get_wsgi_application()
3.
My user directory has permissions 755.
4.
My project directory testsite has 777 permissions (just as a sanity check, though it is driving me insane on this problem) on the parent directory and recursively for everything inside.
Further, the apache group also has ownership.
~ ]$ls -all
drwxrwxrwx. 4 jnett apache 50 Jan 4 22:23 testsite
~ ]$cd testsite/
~/testsite ]$ls -all
total 8
drwxrwxrwx. 4 jnett apache 50 Jan 4 22:23 .
drwxr-xr-x. 9 jnett jnett 4096 Jan 4 23:56 ..
-rwxrwxrwx. 1 jnett apache 251 Dec 30 16:52 manage.py
drwxrwxrwx. 2 jnett apache 6 Jan 4 22:23 static
drwxrwxrwx. 2 jnett apache 70 Jan 4 23:56 testsite
~/testsite ]$cd testsite/
~/testsite/testsite ]$ls -all
total 12
drwxrwxrwx. 2 jnett apache 70 Jan 4 23:56 .
drwxrwxrwx. 4 jnett apache 50 Jan 4 22:23 ..
-rwxrwxrwx. 1 jnett apache 0 Dec 30 16:52 __init__.py
-rwxrwxrwx. 1 jnett apache 2644 Dec 30 16:52 settings.py
-rwxrwxrwx. 1 jnett apache 758 Dec 30 16:52 urls.py
-rwxrwxrwx. 1 jnett apache 554 Jan 4 22:48 wsgi.py
5.
The django version is 1.8 and apache version is 2.4.
6.
SELinux is set to permissive, because I know it can sometimes cause issues.
I've tried every possible little tweak to the above settings and to the permissions of everything they point to, but I still cannot get any result other than the above "permission denied" error. Yes, I've googled this error as much as I could; no, none of the things I found produced a solution. Yes, I've also scoured the Django and mod_wsgi documentation, so please don't just post a link and nothing else unless you noticed a specific obvious discrepancy.
I've been staring at this for quite some time, so I'm hoping a fresh set of eyes can catch something I haven't thought of yet.
It really was an SELinux problem: I misunderstood how to set it to permissive. Changing the value in /etc/selinux/config to permissive does NOT actually put SELinux in a permissive state until the server is rebooted.
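For reference, SELinux can also be switched to permissive immediately, without a reboot, using setenforce from a root shell (commands shown as an ops sketch, not from the thread itself):

```shell
getenforce     # prints Enforcing, Permissive, or Disabled
setenforce 0   # switch to permissive right away (does not survive a reboot)
# /etc/selinux/config only controls the mode applied at the NEXT boot,
# which is exactly the pitfall described above.
```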

Solution for error using Apache 2.2 and mod_wsgi - Deploying Django and error "Forbidden - You don't have permission to access / on this server."

Situation:
I want to deploy a Django app. The app works fine on the server without Apache, but when I try to set up the Django app behind Apache, this error is shown.
Error Message:
Forbidden
You don't have permission to access / on this server.
Apache/2.2.15 (Red Hat) Server at 10.184.56.52 Port 80
Apache Version :
admin@fcovsopti03:/etc/httpd/conf$ /usr/sbin/httpd -V
Server version: Apache/2.2.15 (Unix)
Server built: Aug 18 2015 02:00:22
Server's Module Magic Number: 20051115:25
Server loaded: APR 1.3.9, APR-Util 1.3.9
Compiled using: APR 1.3.9, APR-Util 1.3.9
Architecture: 64-bit
Server MPM: Prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=128
-D HTTPD_ROOT="/etc/httpd"
-D SUEXEC_BIN="/usr/sbin/suexec"
-D DEFAULT_PIDLOG="run/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="logs/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"
admin@fcovsopti03:/etc/httpd/conf$
Apache and Module mod_wsgi installation
$ yum install python27-mod_wsgi.x86_64
Apache is working, but after adding the following lines and restarting the Apache service, the error is shown.
httpd.conf - adding configuration directives for django using mod_wsgi :
WSGIScriptAlias / /u05/admin/NPortalWeb/src/TI/wsgi.py
WSGIPythonPath /u05/admin/NPortalWeb/src/TI:/u05/admin/NPortalWeb/site-packages
<Directory /u05/admin/NPortalWeb/src/>
<Files wsgi.py>
Order allow,deny
Allow from all
</Files>
</Directory>
Users and Groups
root@fcovsopti03:/etc/httpd/conf# id apache
uid=48(apache) gid=48(apache) groups=48(apache),903(admin)
root@fcovsopti03:/etc/httpd/conf# id admin
uid=903(admin) gid=903(admin) groups=903(admin),48(apache)
Directory permissions
admin@fcovsopti03:/u05/admin/NPortalWeb/src$ ll
total 148
-rwxr-xr-x. 1 admin admin 131072 Sep 24 16:45 db.sqlite3
drwxr-xr-x. 4 admin admin 4096 Sep 30 14:52 DisponibilidadTI
-rwxr-xr-x. 1 admin admin 245 Sep 24 16:45 manage.py
-rwxr-xr-x. 1 admin admin 3446 Sep 30 14:53 nohup.out
drwxr-xr-x. 2 admin admin 4096 Sep 30 14:52 TI
admin@fcovsopti03:/u05/admin/NPortalWeb/src$
Linux Version:
admin@fcovsopti03:/u05/admin/NPortalWeb/src$ lsb_release -i -r
Distributor ID: RedHatEnterpriseServer
Release: 6.5
Thank you for any suggestion or idea.
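One sanity check worth running for a "Forbidden" error like this: Apache needs the execute (search) bit on every directory leading down to wsgi.py, not just on the project directory itself. A small sketch (the target path is the one from the question; substitute your own):

```shell
# Print the permissions of every directory on the way down to wsgi.py;
# each one needs the execute bit for the apache user or group.
target=/u05/admin/NPortalWeb/src/TI/wsgi.py   # path from the question
p=$(dirname "$target")
while [ "$p" != "/" ]; do
    ls -ld "$p"
    p=$(dirname "$p")
done
```

If any directory in the chain (for example /u05 or /u05/admin) lacks the execute bit for Apache, the fopen/access will fail even though the file itself is world-readable.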

Strange uwsgi mountpoint for my django site with nginx

I hit a strange issue with my site (nginx 1.7 + uwsgi 2.0 + Django 1.6).
Today I see there are some strange log entries in my uwsgi logs.
Snippet here:
Mon Aug 31 10:43:17 2015 - WSGI app 1 (mountpoint='zc.qq.com|') ready in 0 seconds on interpreter 0xf627c0 pid: 18360
zc.qq.com {address space usage: 421933056 bytes/402MB} {rss usage: 102522880 bytes/97MB} [pid: 18360|app: 1|req: 1/7] 61.132.52.107 () {42 vars in 684 bytes} [Mon Aug 31 10:43:17 2015] GET /cgi-bin/common/attr?id=260714&r=0.6131902049963026 => generated 0 bytes in 6113 msecs (HTTP/1.1 301) 4 headers in 210 bytes (2 switches on core 0)
zc.qq.com {address space usage: 421933056 bytes/402MB} {rss usage: 102522880 bytes/97MB} [pid: 18360|app: 1|req: 2/8] 61.132.52.105 () {44 vars in 986 bytes} [Mon Aug 31 10:43:29 2015] GET /cgi-bin/common/attr?id=260714&r=0.1676001222494321 => generated 0 bytes in 3 msecs (HTTP/1.1 301) 4 headers in 210 bytes (2 switches on core 0)
Actually, zc.qq.com has nothing to do with my site.
So how did this domain end up on my server?
It sits there as a WSGI app and sometimes restarts together with my own Django app, so it sometimes takes my Django app more than 5 seconds to respond to an HTTP request.
I see the pid of the strange app is 18360. So, here:
[root@localhost uwsgi]# ps -ef | grep uwsgi
root 18352 1 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18353 18352 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18354 18352 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18355 18352 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18356 18352 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18357 18352 0 10:40 ? 00:00:12 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18358 18352 0 10:40 ? 00:00:00 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18359 18352 0 10:40 ? 00:00:13 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18360 18352 1 10:40 ? 00:00:18 uwsgi -x /home/uwsgi/uwsgi2.xml
root 18871 18818 0 11:07 pts/2 00:00:00 grep uwsgi
It comes from uwsgi...But here is uwsgi config file:
<uwsgi>
<socket>/var/run/uwsgi.socket</socket>
<listen>100</listen>
<master>true</master>
<vhost>true</vhost>
<no-site>true</no-site>
<pidfile>/usr/local/nginx/uwsgi.pid</pidfile>
<processes>8</processes>
<profiler>true</profiler>
<memory-report>true</memory-report>
<enable-threads>true</enable-threads>
<logdate>true</logdate>
<limit-as>6048</limit-as>
<daemonize>/home/django.log</daemonize>
</uwsgi>
And, here is snippet in nginx.conf(domainname is just sample here)
server {
listen 80;
server_name www.mysite.com;
location / {
uwsgi_pass unix:///var/run/uwsgi.socket;
include uwsgi_params;
uwsgi_param UWSGI_CHDIR /home/mysite;
uwsgi_param UWSGI_SCRIPT wsgi;
access_log off;
}
location /static/ {
root /home/mysite/;
access_log off;
log_not_found off;
autoindex on;
}
}
So there is exactly nothing in any config file related to zc.qq.com or other strange domains (I also see proxyjudge.info).
Anyone hit this before?
Thanks.
Wesley
It is because you have enabled virtual hosting and dynamic apps but do not make any check on the nginx side. The first request for a domain not configured in uWSGI will result in a new app being loaded.
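A common mitigation for this (a sketch, not part of the answer above) is a catch-all server block in nginx, so requests with unknown Host headers are dropped before they ever reach the uWSGI socket:

```nginx
# Hypothetical catch-all: any Host header that matches no configured
# server_name lands here instead of being proxied to uWSGI.
server {
    listen 80 default_server;
    server_name _;
    return 444;   # nginx-specific code: close the connection without a response
}
```

With this in place, only requests for www.mysite.com reach the `uwsgi_pass` location, and uWSGI's dynamic vhosting never sees foreign domains.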

Tuning gunicorn (with Django): Optimize for more concurrent connections and faster connections

I use Django 1.5.3 with gunicorn 18.0 and lighttpd. I serve my static and dynamic content like this using lighttpd:
$HTTP["host"] == "www.mydomain.com" {
$HTTP["url"] !~ "^/media/|^/static/|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
proxy.balance = "hash"
proxy.server = ( "" => ("myserver" =>
( "host" => "127.0.0.1", "port" => 8013 )
))
}
$HTTP["url"] =~ "^/media|^/static|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
alias.url = (
"/media/admin/" => "/var/www/virtualenvs/mydomain/lib/python2.7/site-packages/django/contrib/admin/static/admin/",
"/media" => "/var/www/mydomain/mydomain/media",
"/static" => "/var/www/mydomain/mydomain/static"
)
}
url.rewrite-once = (
"^/apple-touch-icon(.*)$" => "/media/img/apple-touch-icon$1",
"^/favicon(.*)$" => "/media/img/favicon$1",
"^/robots\.txt$" => "/media/robots.txt"
)
}
I already tried running gunicorn (via supervisord) in many different ways, but I can't get it optimized beyond handling about 1100 concurrent connections. In my project I need about 10000-15000 connections.
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 9 -k gevent --preload --settings=myproject.settings
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 10 -k eventlet --worker_connections=1000 --settings=myproject.settings --max-requests=10000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 20 -k gevent --settings=myproject.settings --max-requests=1000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 40 --settings=myproject.settings
On the same server there live about 10 other projects, but CPU and RAM are fine, so this shouldn't be a problem, right?
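For what it's worth, the usual starting heuristic from the gunicorn documentation is workers = (2 × CPU cores) + 1, so the -w values tried above can be sanity-checked against the machine's core count (python3 is used here for the check; the deployment above is a Python 2.7 era setup):

```shell
# Suggested gunicorn worker count per the (2*cores)+1 rule of thumb.
python3 -c "import multiprocessing as mp; print(mp.cpu_count() * 2 + 1)"
```

On the 4-CPU box shown in the iostat output below, that heuristic suggests 9 workers, which matches the first command variant; with async workers (gevent/eventlet) the per-worker connection count, not the worker count, is usually the lever for reaching thousands of concurrent connections.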
I ran a load test and these are the results:
At about 1100 connections my lighttpd error log says something like this; that's where the load test shows the drop in connections:
2013-10-31 14:06:51: (mod_proxy.c.853) write failed: Connection timed out 110
2013-10-31 14:06:51: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 83
2013-10-31 14:06:51: (mod_proxy.c.1316) no proxy-handler found for: /
... after about one minute
2013-10-31 14:07:02: (mod_proxy.c.1361) proxy - re-enabled: 127.0.0.1 8013
These things also appear every now and then:
2013-10-31 14:06:55: (network_linux_sendfile.c.94) writev failed: Connection timed out 600
2013-10-31 14:06:55: (mod_proxy.c.853) write failed: Connection timed out 110
...
2013-10-31 14:06:57: (mod_proxy.c.828) establishing connection failed: Connection timed out
2013-10-31 14:06:57: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 45
So how can I tune gunicorn/lighttpd to serve more connections faster? What can I optimize? Do you know any other/better setup?
Thanks a lot in advance for your help!
Update: Some more server info
root@django ~ # top
top - 15:28:38 up 100 days, 9:56, 1 user, load average: 0.11, 0.37, 0.76
Tasks: 352 total, 1 running, 351 sleeping, 0 stopped, 0 zombie
Cpu(s): 33.0%us, 1.6%sy, 0.0%ni, 64.2%id, 0.4%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 32926156k total, 17815984k used, 15110172k free, 342096k buffers
Swap: 23067560k total, 0k used, 23067560k free, 4868036k cached
root@django ~ # iostat
Linux 2.6.32-5-amd64 (django.myserver.com) 10/31/2013 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
33.00 0.00 2.36 0.40 0.00 64.24
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 137.76 980.27 2109.21 119567783 257268738
sdb 24.23 983.53 2112.25 119965731 257639874
sdc 24.25 985.79 2110.14 120241256 257382998
md0 0.00 0.00 0.00 400 0
md1 0.00 0.00 0.00 284 6
md2 1051.93 38.93 4203.96 4748629 512773952
root@django ~ # netstat -an | grep :80 | wc -l
7129
Kernel Settings:
echo "10152 65535" > /proc/sys/net/ipv4/ip_local_port_range
sysctl -w fs.file-max=128000
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.core.somaxconn=250000
sysctl -w net.ipv4.tcp_max_syn_backlog=2500
sysctl -w net.core.netdev_max_backlog=2500
ulimit -n 10240

Django + lighttpd + fcgi performance

I am using Django to handle fairly long HTTP POST requests, and I am wondering if my setup has some limitations when I receive many requests at the same time.
lighttpd.conf fcgi:
fastcgi.server = (
"a.fcgi" => (
"main" => (
# Use host / port instead of socket for TCP fastcgi
"host" => "127.0.0.1",
"port" => 3033,
"check-local" => "disable",
"allow-x-send-file" => "enable"
))
)
Django init.d script start section:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
Starting Django using the script above results in multiple Django server processes:
www-data 342 7873 0 04:58 ? 00:01:04 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 343 7873 0 04:58 ? 00:01:15 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 378 7873 0 Feb14 ? 00:04:45 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 382 7873 0 Feb12 ? 00:14:53 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 386 7873 0 Feb12 ? 00:12:49 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
www-data 7873 1 0 Feb12 ? 00:00:24 python /var/www/a/manage.py runfcgi host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid
In the lighttpd error.log, I do see load = 10, which shows I am getting many requests at the same time; this happens a few times a day:
2010-02-16 05:17:17: (mod_fastcgi.c.2979) got proc: pid: 0 socket: tcp:127.0.0.1:3033 load: 10
Is my setup correct for handling many long HTTP POST requests (they can last a few minutes each) at the same time?
I think you may want to configure your FastCGI worker to run multi-process or multi-threaded.
From manage.py runfcgi help:
method=IMPL prefork or threaded (default prefork)
[...]
maxspare=NUMBER max number of spare processes / threads
minspare=NUMBER min number of spare processes / threads.
maxchildren=NUMBER hard limit number of processes / threads
So your start command would be:
start-stop-daemon --start --quiet \
--pidfile /var/www/tmp/a.pid \
--chuid www-data --exec /usr/bin/env -- python \
/var/www/a/manage.py runfcgi \
host=127.0.0.1 port=3033 pidfile=/var/www/tmp/a.pid \
method=prefork maxspare=4 minspare=4 maxchildren=8
You will want to adjust the number of processes as needed. Note that memory usage increases linearly with the number of FCGI processes. Also, if your processes are CPU-bound, having more processes than available CPU cores won't help much with concurrency.
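As a rough illustration of that linear growth (the per-process memory figure here is an assumption for the sake of arithmetic, not a measurement from this setup):

```shell
# 8 maxchildren at an assumed ~80 MB resident per FCGI process.
per_child_mb=80
maxchildren=8
echo $((per_child_mb * maxchildren))   # prints 640 (total MB)
```

So before raising maxchildren, it is worth checking a real process's resident size in top and multiplying it out against the RAM you can spare.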