Apache, uWSGI, Django lookup time

I have my setup hosted on AWS EC2, on an Ubuntu machine, running a Django server with uWSGI and Apache. I've been trying for a while to figure out why the dev and local environments have such different performance.
With the local server my index.html page returns in 80 ms; in dev it takes almost 1 s.
I have django-debug-toolbar installed and it reports 300 ms of CPU time, but Chrome reports a 1.3 s load time (Waiting (TTFB)).
Another big difference: opening the page by its URL takes 1 s, but entering the server's IP directly loads it in 300 ms.
I've tried everything I can think of and can't figure out where the loading difference comes from.
My Apache virtual host:
<VirtualHost *:80>
    <Location />
        Options FollowSymLinks Indexes
        SetHandler uwsgi-handler
        uWSGISocket 127.0.0.1:3031
    </Location>
</VirtualHost>
uWSGI conf:
[uwsgi]
socket = 127.0.0.1:3031
chdir = /home/ubuntu/production/<mysite>
processes = 4
threads = 2
wsgi-file = <mysite/project>/wsgi.py
virtualenv = /home/ubuntu/production
venv = /home/ubuntu/production
buffer-size = 32768

For those who face a similar problem:
I figured out that my problem was with cookies: I was keeping track of the browsing history inside my site as an array. I never pinned down the exact technical reason it was slowing my requests, but that was the problem.
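A likely technical reason (my reading, not confirmed above): the history cookie was sent with every request and kept growing, so each request carried kilobytes of headers; note the buffer-size = 32768 in the uWSGI conf above, raised from uWSGI's 4 KB default. A minimal sketch of capping such a cookie in a Django view, with hypothetical names (browse_history, MAX_HISTORY are made up):

import json

from django.shortcuts import render

MAX_HISTORY = 10  # hypothetical cap: keep only the newest entries

def index(request):
    # "browse_history" is a made-up name standing in for the
    # browsing-history array described above.
    history = json.loads(request.COOKIES.get("browse_history", "[]"))
    history.append(request.path)
    response = render(request, "index.html")
    # Trim before writing back so the cookie cannot grow without bound.
    response.set_cookie("browse_history", json.dumps(history[-MAX_HISTORY:]))
    return response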

Related

Apache throws ERROR 500: Internal Server Error when GET from localhost/internal network

I have a production server with apache and django installed using mod_wsgi.
The django application has a REST API that serves some info when a GET request is sent.
This had always worked fine on the development server, where we ran Django using manage.py in a screen session. Now we have created a production server with Apache running Django, but this API returns Error 500 when running wget from localhost or from other machines on the same network (using a 192.168.X.X IP).
Here's the output from wget:
~$ wget localhost:80/someinfo
--2020-04-02 16:26:59-- http://localhost/someinfo
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2020-04-02 16:26:59 ERROR 500: Internal Server Error.
It seems the connection succeeds, so I guess it's not an Apache problem; the error comes from the API response.
The corresponding line in Apache's log looks like this:
127.0.0.1 - - [02/Apr/2020:14:24:36 +0000] "GET /someinfo HTTP/1.1" 500 799 "-" "Wget/1.19.4 (linux-gnu)"
Side question: what is the number after the 500? Sometimes it's 799 and other times 803. (In Apache's common log format that field is the size in bytes of the response body.)
But if the request is done using the public IP of the server from outside (i.e. from the browser) the API works fine and I see the correct information.
I already checked Django's ALLOWED_HOSTS, and it was accepting localhost and the 192.168.X.X IP of the other machine. In the end I left Django's settings.py like this:
#ALLOWED_HOSTS = ['localhost', '127.0.0.1', '192.168.1.101']
ALLOWED_HOSTS = ['*']
Note: 192.168.1.101 is the machine that tries to make the GET request.
The final goal of all this is to be able to make a GET request from a Python script running on that machine (which already works when Django runs via manage.py).
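For reference, a sketch of that client script using only the Python standard library; the URL mirrors the wget example above (from the 192.168.1.101 machine you would substitute the server's LAN address for localhost):

import urllib.request

# Mirrors `wget localhost:80/someinfo` from this question; swap in the
# server's LAN IP when running from the 192.168.1.101 machine.
with urllib.request.urlopen("http://localhost/someinfo") as resp:
    print(resp.status, resp.read().decode())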
My apache.conf:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    #DocumentRoot /var/www/html
    Alias /static /home/myuser/myproject/django/static_root
    <Directory /home/myuser/myproject/django/static_root>
        Require all granted
    </Directory>
    <Directory /home/myuser/myproject/django/myproject_django>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
    WSGIDaemonProcess myproject python-home=/home/myuser/env python-path=/home/myuser/myproject/django
    WSGIProcessGroup myproject
    WSGIScriptAlias / /home/myuser/myproject/django/myproject_django/wsgi.py
</VirtualHost>
I tried running Django via manage.py and the wget from localhost works just fine. The problem only appears when Django is run by Apache.
I also tried the solution given in this post, but changing the line does not fix the error.
I have some doubts concerning this error:
How does Apache run Django?
Does restarting the apache2 service also restart Django (thus re-reading settings.py)?
Is there any Django settings file other than the one I'm editing?
How can I see Django's logs? I don't have a console now, so I can't watch real-time prints (see the sketch after this list).
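On the last doubt, a common approach (a sketch, not taken from this post): under mod_wsgi, anything Django writes to stderr ends up in Apache's error.log, so a minimal LOGGING block in settings.py along these lines makes tracebacks and log messages visible there:

# settings.py -- route Django's logging to stderr, which mod_wsgi
# forwards to Apache's error.log.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},  # stderr by default
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}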
Any help is much appreciated.
I finally managed to solve it myself.
It turns out mod_wsgi was handling requests from localhost and requests from external IPs in different application groups. So all I had to do was put
WSGIApplicationGroup %{GLOBAL}
in /etc/apache2/sites-available/000-default.conf
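For context, a sketch of where that directive sits, using the vhost from the question above:

<VirtualHost *:80>
    WSGIDaemonProcess myproject python-home=/home/myuser/env python-path=/home/myuser/myproject/django
    WSGIProcessGroup myproject
    # Force the app into mod_wsgi's global application group so every
    # request, local or external, hits the same interpreter instance.
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /home/myuser/myproject/django/myproject_django/wsgi.py
</VirtualHost>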

Running apache bloodhound on apache2 web server

I am trying to run the Apache Bloodhound tracker on the Apache2 web server. I am using Bloodhound 0.7 and followed the guide at https://issues.apache.org/bloodhound/wiki/BloodhoundInstall
Bloodhound is running on port 8000.
The problem is that I am not able to run Bloodhound on port 80, so that hitting bloodhound.mydomain.com serves Bloodhound. I set up my Apache2 web server configuration file as specified on that page:
/etc/apache2/sites-available/bloodhound
<VirtualHost *:8080>
    WSGIDaemonProcess bh_tracker user=ubuntu python-path=/home/ubuntu/bloodhound-0.7/installer/bloodhound/lib/python2.7/site-packages
    WSGIScriptAlias /bloodhound /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin/trac.wsgi
    <Directory /home/ubuntu/bloodhound-0.7/installer/bloodhound/site/cgi-bin>
        WSGIProcessGroup bh_tracker
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
    <LocationMatch "/bloodhound/[^/]+/login">
        AuthType Digest
        AuthName "ubuntu"
        AuthDigestDomain /bloodhound
        AuthUserFile /home/ubuntu/bloodhound-0.7/installer/bloodhound/environments/main/bloodhound.htdigest
        Require valid-user
    </LocationMatch>
</VirtualHost>
After adding the above file, it's not running on either port 8000 or 8080.
How do I make it run? Kindly help me. By the way, I am using an Ubuntu EC2 instance.
By golly I think I've figured it out! I've been stuck right about where you are on my own Bloodhound port configuration for days.
n3storm is correct: the whole magic of setting up mod_wsgi is that you no longer need to manually start Bloodhound with that
tracd port=8080 /ridiculously/long/path/to/bloodhound/installer/bloodhound/environments/main
command. Instead, mod_wsgi runs all that Python for you the moment your web browser requests http://[host]:8080/bloodhound, meaning your Bloodhound server is ready to serve the moment it's turned on.
The pain is how many interlocking config files are involved, and how many tiny things can break down the whole process. I don't really know Python, I just barely understand Apache, and I'm 70% confident I've accidentally opened some gaping security hole that I don't understand, but here's my understanding of the mod_wsgi + Apache + Bloodhound domino chain. Paths are for my Apache 2.4 installation on Ubuntu 14.04.1 LTS:
1. You load http://[host]:8080/bloodhound
For this to work, I needed to edit /etc/apache2/ports.conf so that Apache is actually listening on port 8080. So add the line
Listen 8080
to /etc/apache2/ports.conf
Now visiting http://[host]:8080/bloodhound should at least show you something from Apache. For me, it was an HTTP Error 403: Forbidden page, and next up is my home remedy for the Error 403 blues!
2. Apache triggers bloodhound.conf
FULL PATH: /etc/apache2/sites-available/bloodhound.conf
Technically, Apache is looking in /etc/apache2/sites-enabled/ for a matching VirtualHost rule, but you set this up by creating/editing .conf files in /sites-available/ and then activating them with the Apache command
a2ensite [sitename].conf
So: apparently Apache 2.4 changed its access-control syntax for .conf files. To stop the Error 403ing, I changed
Order deny,allow
Allow from all
in /etc/apache2/sites-available/bloodhound.conf to
Require all granted
And then once again you should restart Apache with
sudo apachectl graceful
or
sudo /etc/init.d/apache2 graceful
or maybe
sudo service apache2 restart
I'm not sure; they all seem to work equally well, but I suppose the graceful ones are nice because they don't shut down your server or something important like that.
3. bloodhound.conf triggers trac.wsgi
FULL PATH: /ridiculously/long/path/to/bloodhound/installer/bloodhound/site/cgi-bin/trac.wsgi
After figuring out that ton of other things, I realized that, in the end, the default script that Bloodhound generates worked fine for me:
import os

def application(environ, start_request):
    # Point Trac/Bloodhound at its environment directory unless a
    # parent dir was already supplied by the server configuration.
    if 'trac.env_parent_dir' not in environ:
        environ.setdefault('trac.env_path', '/usr/local/bloodhound/installer/bloodhound/environments/main')
    # Pick a writable egg cache so setuptools does not fall back to
    # the daemon user's (possibly unwritable) home directory.
    if 'PYTHON_EGG_CACHE' in environ:
        os.environ['PYTHON_EGG_CACHE'] = environ['PYTHON_EGG_CACHE']
    elif 'trac.env_path' in environ:
        os.environ['PYTHON_EGG_CACHE'] = \
            os.path.join(environ['trac.env_path'], '.egg-cache')
    elif 'trac.env_parent_dir' in environ:
        os.environ['PYTHON_EGG_CACHE'] = \
            os.path.join(environ['trac.env_parent_dir'], '.egg-cache')
    from trac.web.main import dispatch_request
    return dispatch_request(environ, start_request)
4. trac.wsgi serves up the HTML files for Bloodhound
Isn't the internet just magical?
By using Apache mod_wsgi you don't need to run Bloodhound separately anymore; mod_wsgi is what keeps Bloodhound running. You should use the standard Apache port in this case.
Also, I guess you should use a ServerName directive in the VirtualHost (or do you only serve one host?).

Django app is laggy when Apache2 is used versus the development server

The setup is:
Windows XP VM (Stuck with this for the time being - we're on an Intranet)
Apache 2,
mod_wsgi
django 1.4
virtualenv
We only have two users at most using this application simultaneously
Everything works, but there is a significant delay (10-20 seconds) between the browser's request and the response sent back by the server.
If I replace Apache2 with the Django development server (which I do not want to do in production), the app is very responsive. So my assumption is that the problem lies in the Apache2 or mod_wsgi configuration.
I am not an Apache expert and have spent hours looking for the right settings, but have failed to find anything that improves the response time.
Any assistance would be greatly appreciated.
Here are the settings that I have either changed or added to my httpd.conf:
# ThreadsPerChild: constant number of worker threads in the server process
ThreadsPerChild 10
# Changed MaxRequestsPerChild 0 to 1 for Django
MaxRequestsPerChild 1
# For Django KeepAlive should be OFF
KeepAlive Off
WSGIApplicationGroup %{GLOBAL}
#######################################
WSGIScriptAlias / "C:/virtual_env/sitar_env2/cissimp/cissimp/wsgi.py"
WSGIPythonPath C:/virtual_env/sitar_env2/Lib/site-packages;C:/virtual_env/sitar_env2/cissimp
Alias /static "C:/virtual_env/sitar_env2/cissimp/cissimp/static"
<Directory "C:/virtual_env/sitar_env2/cissimp/cissimp">
<Files wsgi.py>
Order allow,deny
Allow from all
</Files>
</Directory>
##########################################
Don't set:
MaxRequestsPerChild 1
You are effectively restarting the Apache worker process on every request, which means loading the whole Django application on every request. You should not do that.
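A sketch of the corrected httpd.conf lines, with the question's other settings left as they were:

# 0 = unlimited requests per child process, so the worker (and the
# Django app it has loaded) survives across requests.
MaxRequestsPerChild 0
ThreadsPerChild 10
KeepAlive Off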

Can I serve one django app from 2 different apache vhosts?

I've got a django app that's currently available at dev.mydomain.com, and I'm about to move it to clientsdomain.com. I'm on Ubuntu so I'll run a2dissite dev.mydomain.com and then a2ensite clientsdomain.com.
My vhost files are identical except for the server name:
<VirtualHost 8.8.8.4>
    ServerName dev.mydomain.com
    #....
</VirtualHost>
and
<VirtualHost 8.8.8.4>
    ServerName clientdomain.com
    #....
</VirtualHost>
(obviously that's not my ip address)
I just want to know if I actually have to take down the dev vhost before I run my app from the live vhost. Can I run them both together? Are there any risks to having them up at the same time (if that's even possible)?
Each Django instance will serve the requests coming in to its corresponding VirtualHost, so no, you don't have to take the dev vhost down first; in theory they can run in parallel.
However, you don't go into much detail about your setup. For example, are the two vhosts backed by the same database? In that case, you do realize the problem, right?
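If it helps, a minimal sketch of the two vhosts coexisting, assuming a mod_wsgi deployment (the /srv/myapp paths and daemon names are placeholders, not from the question):

<VirtualHost 8.8.8.4>
    ServerName dev.mydomain.com
    # Daemon process names must be unique server-wide.
    WSGIDaemonProcess myapp_dev python-path=/srv/myapp
    WSGIProcessGroup myapp_dev
    WSGIScriptAlias / /srv/myapp/wsgi.py
</VirtualHost>

<VirtualHost 8.8.8.4>
    ServerName clientdomain.com
    WSGIDaemonProcess myapp_live python-path=/srv/myapp
    WSGIProcessGroup myapp_live
    WSGIScriptAlias / /srv/myapp/wsgi.py
</VirtualHost>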

Run django application on apache with uWSGI

I wanted to run my Django application using Apache and uWSGI, so I installed Apache with the worker MPM. When I finally ran my app and tested its performance with httperf, I noticed the system was able to serve only one user at a time. The strange thing is that when I run uWSGI with the same command as below, but behind nginx, I can serve 97 concurrent users. Is it possible that Apache is this slow?
My Apache configuration looks like this (the most important elements; the remaining settings are defaults):
<IfModule mpm_worker_module>
    StartServers 2
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxClients 63
    MaxRequestsPerChild 0
</IfModule>
...
<Location />
    SetHandler uwsgi-handler
    uWSGISocket 127.0.0.1:8000
</Location>
I run uwsgi using:
uwsgi --socket :8000 --chmod-socket --module wsgi_app --pythonpath /home/user/directory/uwsgi -p 6
I recommend that you put Apache behind Nginx. For example:
bind Apache to 127.0.0.1:81
bind nginx to 0.0.0.0:80
make nginx proxy domains that Apache should serve
It's not a direct answer to your question but that's IMHO the best solution:
best performance
best protection for Apache
allows you to migrate Apache websites to Nginx step by step (uWSGI supports PHP now ...), again for best performance and security
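For illustration, a minimal sketch of the Nginx side of that setup, assuming Apache has been rebound to 127.0.0.1:81 and with example.com standing in for the real domain:

# /etc/nginx/sites-available/example -- proxy a domain to Apache,
# which now listens only on the loopback interface.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:81;
        # Preserve the original host and client address for Apache's logs.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}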