Run Django application on Apache with uWSGI

I want to run my Django application using Apache and uWSGI, so I installed Apache with the worker MPM (mpm_worker_module). When I finally ran my app and tested its performance using httperf, I noticed that the system was able to serve only one user at a time. The strange thing is that when I run uWSGI with the same command as below behind nginx, I can serve 97 concurrent users. Is it possible that Apache is really that slow?
My Apache configuration looks like this (the most important elements; the remaining settings are defaults):
<IfModule mpm_worker_module>
StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxClients 63
MaxRequestsPerChild 0
</IfModule>
...
<Location />
SetHandler uwsgi-handler
uWSGISocket 127.0.0.1:8000
</Location>
I run uWSGI with:
uwsgi --socket :8000 --chmod-socket --module wsgi_app --pythonpath /home/user/directory/uwsgi -p 6
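For reference, a concurrency test of the kind described might look like this; the exact httperf parameters are my assumption, not from the original post:
httperf --server localhost --port 80 --uri / --num-conns 1000 --rate 100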

I recommend that you put Apache behind Nginx. For example:
bind Apache to 127.0.0.1:81
bind nginx to 0.0.0.0:80
make nginx proxy the domains that Apache should serve
It's not a direct answer to your question, but IMHO it's the best solution:
best performance
best protection for Apache
allows you to migrate Apache websites to Nginx step by step (uWSGI supports PHP now ...), again for best performance and security
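For illustration, a minimal nginx server block for that layout might look like this (the server name and header choices are placeholders, not from the original answer):
server {
    listen 0.0.0.0:80;
    server_name example.com;

    # proxy everything for this domain to Apache on the loopback interface
    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
On the Apache side, Listen 127.0.0.1:81 in ports.conf keeps Apache reachable only through nginx.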

Related

How to run Daphne and Gunicorn At The Same Time?

I'm using django-channels, so I need to use Daphne, but for the static files and other things I want to use Gunicorn. I can start Daphne alongside Gunicorn, but I cannot start both of them at the same time.
My question is: should I start both of them at the same time, or is there a better option? If I should, how can I do that?
Here is my server run command:
gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload && daphne -b 0.0.0.0 -p 8089 app.asgi:application
PS:
I split the / and /ws/ locations between Gunicorn and Daphne in nginx.conf.
Your problem is that you're calling both processes in the same context/line, and the second never gets called because the first one never "ends".
This process: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload
won't terminate at any point, so the command after && never gets run unless you kill the first one manually, and at that point I'm not sure that wouldn't kill the whole process chain entirely.
If you want to run both, you can background both processes with &, e.g.
(I can't test this, but it should work):
gunicorn app.wsgi:application --bind 0.0.0.0:8000 --reload & daphne -b 0.0.0.0 -p 8089 app.asgi:application &
The teach-a-man-to-fish info on this type of issue is here.
I'm fairly certain you will lose the normal console logging you would otherwise have by running these in the background, so I'd suggest looking into nohup instead of a bare &, or sending the logs somewhere with a logging utility so you aren't flying blind.
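For example, an untested sketch of the nohup variant (the log file names are my own placeholders):
nohup gunicorn app.wsgi:application --bind 0.0.0.0:8000 > gunicorn.log 2>&1 &
nohup daphne -b 0.0.0.0 -p 8089 app.asgi:application > daphne.log 2>&1 &
Each process then survives the shell session and writes its output to its own log file.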
As for other options: if you plan to scale up to a large number of users, probably 100+, I would just run two servers, one for WSGI Django HTTP requests and one for ASGI Daphne WebSocket requests. Have nginx proxy between the two for whatever you need and you're done. That's also what Channels recommends for larger applications:
It is good practice to use a common path prefix like /ws/ to distinguish WebSocket connections from ordinary HTTP connections because it will make deploying Channels to a production environment in certain configurations easier.
In particular for large sites it will be possible to configure a production-grade HTTP server like nginx to route requests based on path to either (1) a production-grade WSGI server like Gunicorn+Django for ordinary HTTP requests or (2) a production-grade ASGI server like Daphne+Channels for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy where Daphne serves all requests - HTTP and WebSocket - rather than having a separate WSGI server. In this deployment configuration no common path prefix like /ws/ is necessary.
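For illustration, an nginx configuration for that split might look like the following; the upstream ports match the commands in the question, everything else is an assumption:
upstream django_wsgi {
    server 127.0.0.1:8000;  # Gunicorn, ordinary HTTP
}
upstream django_asgi {
    server 127.0.0.1:8089;  # Daphne, WebSockets
}

server {
    listen 80;

    location / {
        proxy_pass http://django_wsgi;
        proxy_set_header Host $host;
    }

    location /ws/ {
        proxy_pass http://django_asgi;
        # headers required for the WebSocket upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}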
It is not necessary to run both. Daphne is an HTTP, HTTP/2 and WebSocket protocol server.
Take a look at the README at this link:
https://github.com/django/daphne
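In that simpler setup, Daphne serves both HTTP and WebSocket traffic alone; reusing the binding from the question, the single command would be something like:
daphne -b 0.0.0.0 -p 8000 app.asgi:application
(assuming app.asgi:application routes both protocol types, as a standard Channels ProtocolTypeRouter does).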

Apache, uWSGI, Django lookup time

I have my setup hosted on AWS EC2, on an Ubuntu machine, running a Django server with uWSGI and Apache. I've been trying to figure out for a while why the dev environment and my local environment perform so differently.
With the local server I return my index.html page in 80ms, while in dev it takes almost 1s.
I have django-debug-toolbar installed, and the CPU time is 300ms, but Chrome says the loading time is 1.3s (Waiting (TTFB)).
The other big difference is that when I open the page via its URL it takes 1s, but if I enter the server's IP directly it loads in 300ms.
I've already tried everything and can't figure out why the loading times differ.
My apache virtual host:
<VirtualHost *:80>
<Location />
Options FollowSymLinks Indexes
SetHandler uwsgi-handler
uWSGISocket 127.0.0.1:3031
</Location>
</VirtualHost>
uWSGI conf:
[uwsgi]
socket = 127.0.0.1:3031
chdir = /home/ubuntu/production/<mysite>
processes = 4
threads = 2
wsgi-file=<mysite/project>/wsgi.py
virtualenv=/home/ubuntu/production
venv = /home/ubuntu/production
buffer-size=32768
For those who face a similar problem:
I figured out that my problem was with cookies. I was keeping track of the browsing history inside my site as an array stored in a cookie. I still haven't figured out the technical reason why it was slowing my requests down, but that was the problem.
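If you suspect the same issue, one way to check is to log the size of the incoming Cookie header. A minimal sketch as Django middleware (the 4 KB threshold and the logging setup are my own choices, not from the original answer):
import logging

logger = logging.getLogger(__name__)

class CookieSizeMiddleware:
    # Log requests whose Cookie header is suspiciously large.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        cookie_header = request.META.get('HTTP_COOKIE', '')
        if len(cookie_header) > 4096:  # ~4 KB is already a very large cookie
            logger.warning('Large Cookie header: %d bytes', len(cookie_header))
        return self.get_response(request)
Cookies travel with every request, so an oversized one (like a whole browsing history) adds overhead to every single page load.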

Running Apache Bloodhound on the Apache2 web server

I am trying to run the Apache Bloodhound tracker on the Apache2 web server. I am using version 0.7 of Bloodhound. I followed the website https://issues.apache.org/bloodhound/wiki/BloodhoundInstall
Bloodhound is running on port 8000.
The problem is that I am not able to run Bloodhound on port 80, so that if I hit bloodhound.mydomain.com, I get Bloodhound. I set up my Apache2 web server settings file as specified on the website:
/etc/apache2/sites-available/bloodhound
<VirtualHost *:8080>
WSGIDaemonProcess bh_tracker user=ubuntu python-path=/home/ubuntu/bloodhound-0.7/installer/bloodhound/lib/python2.7/site-packages
WSGIScriptAlias /bloodhound /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin/trac.wsgi
<Directory /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin>
WSGIProcessGroup bh_tracker
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
<LocationMatch "/bloodhound/[^/]+/login">
AuthType Digest
AuthName "ubuntu"
AuthDigestDomain /bloodhound
AuthUserFile /home/ubuntu/bloodhound-0.7/installer/bloodhound/environments/main/bloodhound.htdigest
Require valid-user
</LocationMatch>
</VirtualHost>
After adding the above file, it's not running on either of the ports, 8000 or 8080.
How do I make it run? Kindly help me. By the way, I am using an Ubuntu EC2 instance.
By golly I think I've figured it out! I've been stuck right about where you are on my own Bloodhound port configuration for days.
n3storm is correct: the whole magic of setting up mod_wsgi is that you no longer need to manually start Bloodhound with that
tracd --port=8080 /ridiculously/long/path/to/bloodhound/installer/bloodhound/environments/main
command. Instead, mod_wsgi runs all that Python for you the moment your web browser requests http://[host]:8080/bloodhound, meaning your Bloodhound server is ready to serve the moment it's turned on.
The pain is how many interlocking config files are involved, and how many tiny things can break the whole process. I don't really know Python, I just barely understand Apache, and I'm 70% confident I've accidentally opened some gaping security hole that I don't understand, but here's my understanding of the mod_wsgi + Apache + Bloodhound domino chain. Paths are for my Apache 2.4 installation on Ubuntu 14.04.1 LTS:
1. You load http://[host]:8080/bloodhound
For this to work, I needed to edit /etc/apache2/ports.conf so that Apache is actually listening on port 8080. So add the line
Listen 8080
to /etc/apache2/ports.conf
Now visiting http://[host]:8080/bloodhound should at least show you something from Apache. For me, it was an HTTP Error 403: Forbidden page, and next up is my home remedy for the Error 403 blues!
2. Apache triggers bloodhound.conf
FULL PATH: /etc/apache2/sites-available/bloodhound.conf
Technically, Apache is looking in /etc/apache2/sites-enabled/ for a matching VirtualHost rule, but you set this up by creating/editing .conf files in /sites-available/ and then activating them with the Apache command
a2ensite [sitename].conf
So. Apparently, Apache 2.4 changed its access control syntax for .conf files. So, to stop the Error 403ing, I changed
Order deny,allow
Allow from all
in /etc/apache2/sites-available/bloodhound.conf to
Require all granted
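Applied to the Directory block from the question, that gives:
<Directory /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin>
WSGIProcessGroup bh_tracker
WSGIApplicationGroup %{GLOBAL}
Require all granted
</Directory>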
And then once again you should restart Apache with
sudo apachectl graceful
or
sudo /etc/init.d/apache2 graceful
or maybe
sudo service apache2 restart
I'm not sure; they all seem to work equally well, but I suppose the graceful ones are nice because they don't fully shut down your server or something important like that.
3. bloodhound.conf triggers trac.wsgi
FULL PATH: /ridiculously/long/path/to/bloodhound/installer/bloodhound/site/cgi-bin/trac.wsgi
After figuring out all those other things, I realized that, in the end, the default script that Bloodhound generates worked fine for me:
import os

def application(environ, start_request):
    # Point Trac at the Bloodhound environment unless a parent dir is given
    if 'trac.env_parent_dir' not in environ:
        environ.setdefault('trac.env_path',
                           '/usr/local/bloodhound/installer/bloodhound/environments/main')
    # Choose an egg cache location from whatever the environment provides
    if 'PYTHON_EGG_CACHE' in environ:
        os.environ['PYTHON_EGG_CACHE'] = environ['PYTHON_EGG_CACHE']
    elif 'trac.env_path' in environ:
        os.environ['PYTHON_EGG_CACHE'] = \
            os.path.join(environ['trac.env_path'], '.egg-cache')
    elif 'trac.env_parent_dir' in environ:
        os.environ['PYTHON_EGG_CACHE'] = \
            os.path.join(environ['trac.env_parent_dir'], '.egg-cache')
    # Hand the request off to Trac's WSGI dispatcher
    from trac.web.main import dispatch_request
    return dispatch_request(environ, start_request)
4. trac.wsgi serves up the HTML files for Bloodhound
Isn't the internet just magical?
By using Apache mod_wsgi you don't need Bloodhound running separately anymore; mod_wsgi is what runs Bloodhound. You should use the standard Apache port in this case.
Also, I guess you should use a ServerName directive in the VirtualHost (or do you only serve one host?).
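For example, an untested sketch of such a VirtualHost on the standard port, reusing the paths and domain from the question:
<VirtualHost *:80>
ServerName bloodhound.mydomain.com
WSGIDaemonProcess bh_tracker user=ubuntu python-path=/home/ubuntu/bloodhound-0.7/installer/bloodhound/lib/python2.7/site-packages
WSGIScriptAlias / /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin/trac.wsgi
<Directory /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin>
WSGIProcessGroup bh_tracker
WSGIApplicationGroup %{GLOBAL}
Require all granted
</Directory>
</VirtualHost>
With this, hitting bloodhound.mydomain.com on port 80 should reach the tracker without any /bloodhound path prefix.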

ProxyPass and ProxyPassReverse for Django app

I have a follow-up question to my original Django serving question, which was how to develop Django apps and serve them from the same server as my main PHP-based site (all part of a larger migration of my website from a static, PHP-driven site to a series of Django apps).
I couldn't quite use the name server solution I was provided with, and instead just deployed my Django applications on a different port (8000) using mod_wsgi. Now, however, I need to actually integrate the Django application into the main site. In my Apache 2.0 configuration file (for, say, http://www.example.com) I added the following ProxyPass directives (after my mod_wsgi initialization):
ProxyPass /app/newsletter http://www.example.com:8000/app/newsletter
ProxyPassReverse /app/newsletter http://www.example.com:8000/app/newsletter
Here I expect that any request to:
http://www.example.com/app/newsletter
will get successfully proxied to:
http://www.example.com:8000/app/newsletter
and all will be right with the world.
However, this is not the case. Apache hangs for five or so minutes (the time taken to craft this question), then spits out a 502 Proxy Error:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /app/newsletter/.
Reason: Error reading from remote server
Watching my Apache 2.0 error log after this response I see continuous errors like the following:
[Thu Sep 27 15:25:49 2012] [error] Exception KeyError: KeyError(****,) in <module 'threading' from '/usr/lib64/python2.6/threading.pyc'> ignored
So something seems to be amiss in how mod_proxy plays with Django and/or Python. I have other Java-related processes that I use ProxyPass and ProxyPassReverse with, and they work fine. Also, when I don't apply the ProxyPass directives the Django apps all work well (i.e. when I address them on port 8000 directly).
Any ideas? I don't feel like what I am doing is particularly complex.
Thanks in advance.
In the end, using mod_rewrite was the solution; the [P] flag hands the rewritten URL to mod_proxy internally. I added this to Apache's httpd.conf file:
RewriteEngine On
RewriteRule ^/app/newsletter/(.*)$ http://%{SERVER_NAME}:8000%{REQUEST_URI} [P]
Everything works as expected.

Why do we need uWSGI for hosting Django on nginx

Let's see:
Django is WSGI-compatible.
WSGI is the Web Server Gateway Interface.
Now, nginx is a server, so we should be able to communicate with Django. Why, then, do we need uWSGI in between?
Everyone says that uWSGI is a server which speaks the wsgi protocol.
Then what is the uwsgi protocol? How does it differ from WSGI (which is a protocol/specification)?
And again, why do we find the combination Django + uWSGI + nginx?
Can't I speak WSGI between nginx and Django? After all, WSGI is meant to be a specification between a web server (nginx) and web applications (Django).
WSGI is specifically a Python interface: a calling convention between a server process and Python application code, not a wire protocol that a separate program like nginx can speak over a socket. So at a minimum you need something between nginx and Django that accepts a standard HTTP request and translates it into a WSGI call.
uWSGI is just one of several popular WSGI servers. Others include gunicorn and mod_wsgi (an Apache module, which necessitates having Apache installed too). uWSGI happens to be my preferred one, and nginx now has native support for its uwsgi wire protocol, so you won't go too far wrong by using it.
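To make the division of labor concrete, here is a minimal sketch of the two halves; the socket address, module name, and process count are placeholders, not from the original answer:
# uWSGI side: run the Django WSGI application and speak the uwsgi
# protocol on a local socket
uwsgi --socket 127.0.0.1:3031 --module mysite.wsgi --processes 4

# nginx side: forward matching requests to that socket using the
# built-in uwsgi protocol support
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3031;
}
nginx translates each incoming HTTP request into uwsgi-protocol messages, and uWSGI invokes the Django application through the in-process WSGI interface.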