IIS 7.5 crashes after a few requests (with Django + PyISAPIe)

I managed to get Django running with IIS as the webserver (using PyISAPIe), and everything works well on my test server, which runs Windows 2008 Server R2 64-bit.
Then I installed the application on another server with the same configuration. It works fine for the first request, but when I reload the page I get a "Service not working" page.
In the event log I see an application error saying that python26.dll had a problem:
Faulting application name: w3wp.exe
Faulting module name: python26.dll
Exception code: 0x40000015
Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe
Faulting module path: C:\Windows\system32\python26.dll
Can you give me a hint on how to solve this problem?
UPDATE: "Rapid-Fail Protection" in the Advanced Settings of the Application Pool was set to 5 failures; disabling it, all worked well.
So, now the question is: how can I detect what caused the failures?
UPDATE: I discovered that IIS crashes when there are multiple requests (img, css, js). PyISAPIe is called for each of them and hands them off to the static file server once they are recognized.
No idea why this happens...

PyISAPIe is not a good choice for running Django on Windows 2008. In this article you can find a better solution: Running Django on Windows (with performance tests)

Check the event log; it should be in there.
You may also find some more detail in the HTTP error log (C:\Windows\System32\LogFiles\HTTPERR).

I discovered that IIS crashes when there are multiple requests (img, css, js). PyISAPIe is called for each of them and hands them off to the static file server once they are recognized. No idea why this happens...
Do multiple requests cause the error on both machines? When there are multiple requests in an ISAPI application, each request runs in its own thread. CPython's threading model is global: the Global Interpreter Lock (GIL) means all threads running under one Python process share global state and only one of them executes Python bytecode at a time, so multi-threaded code running through a single embedded Python engine is effectively serialized. This is a serious downside of multi-threaded processing in Python and may be the source of your problems. See http://docs.python.org/library/multiprocessing.html and other sources.
But even though this only occurs on one machine and not the other, that may still be the cause - it could also depend on many other environmental variables: number of requests, resources of the machine, processors, etc.
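The multiprocessing module linked above sidesteps the GIL by farming work out to separate processes, each with its own interpreter and its own GIL. A minimal, illustrative sketch (the task function and numbers are made up, not from the original post):

# CPU-heavy work that threads would execute one at a time under the GIL;
# separate processes can run it truly in parallel.
from multiprocessing import Pool

def cpu_bound_task(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool(processes=4)        # four worker processes
    try:
        results = pool.map(cpu_bound_task, [10 ** 6] * 4)
    finally:
        pool.close()
        pool.join()
    print(results)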

Check the memory usage on the machine (total physical).
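A quick way to script that check, assuming the third-party psutil package is installed (pip install psutil):

# Prints total and available physical memory plus usage percentage.
import psutil

mem = psutil.virtual_memory()
print("total:     %d MB" % (mem.total // (1024 * 1024)))
print("available: %d MB" % (mem.available // (1024 * 1024)))
print("used:      %.1f%%" % mem.percent)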

Related

Django - Development server alternatives

Are there good alternatives to the Django development server (runserver) that are more performant,
especially in concurrency and static file serving, and that have the auto-reload function, without having to set up a full-blown production environment?
I'm working on Windows, so gunicorn cannot be used.
You can install and use the rungevent command. It has an auto-reload function and it's more performant than thread-based servers (it is greenlet-oriented). The only caveat is static file serving: you must install a webserver or proxy like nginx for that.
Are you running such high-volume tests against your dev server that you suffer from this, especially regarding static files? If so, then you must emulate a production environment, as said (just have nginx correctly configured, pointing to the address:port you use for your rungevent command).
If static files are not your problem, install rungevent and see how it works; a sketch of what such a server boils down to follows below.
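For reference, a greenlet-based dev server is conceptually just Django's WSGI application wrapped in gevent's server. This is an illustrative sketch, not rungevent itself: "myproject.settings" is an assumed name, and rungevent's auto-reload feature is omitted.

# Serve Django's WSGI application from gevent's greenlet-based server.
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

from gevent.pywsgi import WSGIServer
from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()
WSGIServer(("127.0.0.1", 8000), application).serve_forever()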
No. Since dev sites are made to handle a limited number of requests, runserver runs fine on a machine that can match the requirements of your app.
If you are dealing with a large-scale dev project which your system cannot tolerate, then it's time either to reproduce a production environment or to upgrade.
I find it difficult to believe that your application is that bad in terms of performance; again, if you are trying to test the behavior of a full production site (in terms of DB entries etc.), then it's time to emulate the production environment.
If that is not the case, then I would start checking the underlying models / code of the project.
Well, if you don't want to use the Django dev server, you will have to spend some time on setup anyway. But the good part is that you only have to do it once; subsequent deployments will take very little time.
Not long ago I switched from FastCGI to uWSGI, and it made my life much easier.
uWSGI is awesome! It has autoreload (which works both in daemon mode and when launched directly in a terminal). When launched in a terminal you can use a debugger (e.g. pdb) during a request, just like you do with the Django dev server. And of course you can debug with print in simple cases.
I'm using it with nginx, which serves static files and proxies the rest to uWSGI, but it can of course be any server.
The most useful feature of this configuration for me is that you use the same thing both for dev and production. For simple projects, once development is done you just turn off autoreload and a few other options and it's ready.
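To illustrate the "same thing for dev and production" point: a single WSGI entry point can be served by uWSGI in both modes. The project name, module name, and socket path here are assumptions for illustration.

# wsgi.py - one entry point for both dev and production.
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

# Dev, launched directly in a terminal with autoreload:
#   uwsgi --http :8000 --module wsgi --py-autoreload 1
# Production behind nginx, autoreload off:
#   uwsgi --socket /tmp/mysite.sock --module wsgi --processes 4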

Hawtio stops working after running for days

We are using hawtio to get a fancy and nice web interface for viewing JMX MBeans and Camel routes in our project. However, we have noticed that after weeks of running, hawtio stops working and we get a Jetty error when trying to access it.
We are using hawtio in standalone mode, version 1.2.0/offline. It's also worth mentioning that our Camel routes are pretty heavy and consume many resources (not sure if that impacts hawtio). When trying to access it we get this:
HTTP ERROR 404
Problem accessing /ourContextPath/. Reason:
Not Found
Powered by Jetty://
It seems like there is no longer an active resource for our context path, and I guess something went wrong, like a thread stopping.
Does anybody have any idea how to solve this, or how to find out what's causing it? Also, is this a known bug that was fixed in the latest version (1.2.1)?
Jetty needs a work/temp directory to operate.
Default behavior is to use whatever java.io.tmpdir points to.
However, on many unix installations, this points to /tmp, and that directory is often cleaned out by other processes.
To fix this, either specify a java.io.tmpdir pointing somewhere other than /tmp
$ java -Djava.io.tmpdir=/var/run/jetty -jar start.jar
or create a ${jetty.base}/work/ directory (if running Jetty 9.1+)
or create a ${jetty.home}/work/ directory (if running versions of Jetty prior to 9.1)
See the answer at Jetty: Starts in C:\Temp for more details on how this work/temp directory operates and is configured.

Django Reloading on every request

I wanted to load some data and keep it in memory, as in application scope. Based on this and other posts on Stack Overflow, I've put the required code snippet in settings.py, urls.py, and models.py. I also added print statements to see when they get executed. I see all the print statements in the server log with every request.
The following are the version details:
Linux 2.6.32-358.el6.x86_64
Apache/2.2.15 (Unix)
Django 1.4
Python 2.7.4
It looks like Django is reloading for every request. I also looked into this and confirmed with the admin that MaxRequestsPerChild is NOT 1.
If you are running in mod_wsgi embedded mode, you will have a multi-process configuration, so it can take a while to warm up all the processes with your code. Also, Apache will kill off idle processes, so you will see process churn. What you may be seeing is the result of that.
Add printing of the process ID to your debug code to confirm this, for example as sketched below.
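A minimal way to do that, assuming nothing about the project beyond a standard settings.py: put a print at module level so it fires once per process, then watch the Apache error log for changing PIDs across requests.

# At the top of settings.py (runs once per process, at import time).
# The message text is arbitrary; watch for varying PIDs between requests.
import os
print("settings.py loaded in process %d" % os.getpid())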
The easiest thing to do is use mod_wsgi daemon mode and restrict yourself to a small fixed number of persistent processes.
Also go watch my PyCon talk about this sort of stuff at:
http://lanyrd.com/2013/pycon/scdyzk/

Serve multiple Django and PHP projects on the same machine?

The documentation states that one should not serve static files on the same machine as the Django project, because static content will kick the Django application out of memory. Does this problem also come from having multiple Django projects on one server? Should I combine all my website projects into one very large Django project?
I'm currently serving Django along with PHP scripts from Apache with mod_wsgi. Does this also cause a loss of efficiency?
Or is the warning just meant for static content, because the problem arises when serving hundreds of files, while serving 20-30 different PHP/Django projects is OK?
I would say that this setup is completely OK. Of course it depends on the hardware, the load, and the other projects. But here you can just try it and monitor usage/performance.
The suggestion to use different server(s) for static files makes sense, as it is more efficient with resources. But as long as one server performs well enough, I don't see a reason to use a second one.
Another question - which has less to do with performance than with ease of use/configuration - is whether you really want to run everything on the same server.
For one setup with a bunch of smaller sites (as well as some PHP legacy) we use one machine with four virtual servers:
webhead running nginx (and varnish)
database
simple apache2/php server
django server using gunicorn + supervisord
nginx handles all the sites, either proxying to the application server or serving static content (via NAS). I like this setup, as it is very easy to install and manage, and it makes it simple to scale out one piece if needed.
If the documentation says "one should not serve static files on the same machine as the Django project, because static content will kick the Django application out of memory", then the documentation is very misleading and arguably plain wrong.
The one suggestion I would make if using PHP on the same system is to ensure you are using mod_wsgi daemon mode for running the Python web application, and even one daemon process per Python web application.
Do not run the Python web application in embedded mode, because that means you are running stuff in the same process as mod_php, and because PHP (including its extensions) is not really multithread-safe, you then have to run the prefork MPM. Running Python web applications embedded in Apache under the prefork MPM is a bad idea unless you know very well how to set up Apache properly for it. If you don't set up Apache right, you get issues like those described in:
http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html
The short of it is that the Apache configuration for PHP and Python needs to be quite different. You can get around that, though, by using mod_wsgi daemon mode for the Python web application.

Django + WSGI: Refreshing Issues?

I'm developing a Django site. I'm making all my changes on the live server, just because it's easier that way. The problem is, every now and then the server seems to cache one of the *.py files I'm working on. Sometimes if I hit refresh a lot, it will switch back and forth between an older version of the page and a newer one.
My set up is more or less like what's described in the Django tutorials: http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/#howto-deployment-modwsgi
I'm guessing it's doing this because it fires up multiple instances of the WSGI handler, and depending on which handler the HTTP request gets sent to, I may receive a different version of the page. Restarting Apache seems to fix the problem, but it's annoying.
I really don't know much about WSGI or "MiddleWare" or any of that request handling stuff. I come from a PHP background, where it all just works :)
Anyway, what's a nice way of resolving this issue? Will running the WSGI handler in "daemon mode" alleviate the problem? If so, how do I get it to run in daemon mode?
Running the process in daemon mode will not help. Here's what's happening:
mod_wsgi spawns multiple identical processes to handle incoming requests for your Django site. Each of these processes is its own Python interpreter and can handle an incoming web request. These processes are persistent (they are not brought up and torn down for each request), so a single process may handle thousands of requests one after the other. mod_wsgi is able to handle multiple web requests simultaneously because there are multiple processes.
Each process's Python interpreter loads your modules (your custom Python files) whenever an "import module" is executed. In the context of Django, this happens when a views.py is first needed for a web request. Once a module is loaded, it resides in memory, so any changes you make to the file will not be reflected in that process. As more web requests come in, the process's Python interpreter simply uses the version of the module that is already loaded in memory. You are seeing inconsistencies between refreshes because each web request you make can be handled by a different process: some processes may have loaded your Python modules during earlier revisions of your code, while others may have loaded them later (since those processes had not yet received a web request).
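The caching behaviour itself is plain Python and easy to see in isolation ("mymodule" is a hypothetical module used for illustration):

# A module is executed once per process; later imports are served
# from the sys.modules cache.
import sys
import mymodule      # first import: mymodule.py is read and executed
import mymodule      # second import: no-op, the cached object is reused
assert sys.modules["mymodule"] is mymodule

# Editing mymodule.py on disk changes nothing for this process;
# only an explicit reload re-reads the file:
reload(mymodule)     # Python 2; importlib.reload(mymodule) on Python 3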
The simple solution: any time you modify your code, restart the Apache process. Most of the time that is as simple as running "/etc/init.d/apache2 restart" as root from the shell. I believe a simple reload works as well and is faster: "/etc/init.d/apache2 reload".
The daemon solution: if you are using mod_wsgi in daemon mode, then all you need to do is touch (the unix command) or otherwise modify your WSGI script file. To clarify scrompt.com's post: modifications to your Python source code do not make mod_wsgi reload your code; reloading only occurs when the WSGI script file itself has been modified.
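If you'd rather do the touch from Python, updating the file's modification time is enough (the path is an assumption for illustration):

# Equivalent of the unix "touch" for an existing WSGI script;
# passing None sets both access and modification time to "now".
import os
os.utime("/var/www/mysite/django.wsgi", None)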
Last point to note: for simplicity I only spoke about mod_wsgi as using processes; it actually uses thread pools inside each process. I did not feel this detail was relevant to this answer, but you can find out more by reading about mod_wsgi.
Because you're using mod_wsgi in embedded mode, your changes aren't picked up automatically. You see them every once in a while because Apache sometimes starts up new handler instances, which catch the updates.
You can resolve this by using daemon mode, as described here. Specifically, you'll want to add the following directives to your Apache configuration:
WSGIDaemonProcess example.com processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup example.com
Read the mod_wsgi documentation rather than relying on the minimal information about mod_wsgi hosting contained on the Django site. In particular, read:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
This tells you exactly how source code reloading works in mod_wsgi, including a monitor you can use to implement the same sort of source code reloading that Django's runserver does. Also see the following posts, which talk about how to apply that to Django:
http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html
http://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html
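The wiki page above contains a complete, production-ready monitor; the core idea is roughly the following stripped-down, Unix-only sketch, under the assumption that signalling a mod_wsgi daemon process causes it to restart, and with many edge cases omitted.

# Sketch of the idea behind mod_wsgi's code monitor: a background
# thread polls the mtimes of loaded source files and, in daemon mode,
# signals the process so mod_wsgi restarts it with fresh code.
import os
import signal
import sys
import threading
import time

def _monitor(interval=1.0):
    mtimes = {}
    while True:
        for module in list(sys.modules.values()):
            path = getattr(module, "__file__", None)
            if not path:
                continue
            if path.endswith((".pyc", ".pyo")):
                path = path[:-1]            # check the .py source instead
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue
            if path not in mtimes:
                mtimes[path] = mtime        # first sighting: remember it
            elif mtime != mtimes[path]:
                # A source file changed; ask mod_wsgi to restart us.
                os.kill(os.getpid(), signal.SIGINT)
        time.sleep(interval)

def start_monitor():
    thread = threading.Thread(target=_monitor)
    thread.daemon = True                    # don't block process shutdown
    thread.start()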
You can resolve this problem by not editing your code on the live server. Seriously, there's no excuse for it. Develop locally using version control, and if you must, run your server from a live checkout, with a post-commit hook that checks out your latest version and restarts Apache.