CPU load and a Django application that makes long-response-time requests to an external API - django

I'm developing a web application in Python in which each user request makes an API call to an external service and takes about 20 seconds to receive a response. As a result, when several concurrent requests are made, the CPU load goes crazy (>95%) with several idle processes.
The server consists of a 1.6 GHz dual-core Atom 330 with 2 GB RAM.
The web app is developed in Python and is served through Apache with mod_wsgi.
My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested in why)? Can you suggest any other scalable solution?

This really doesn't have much to do with blocking, although it does in one sense: the 20-second request blocks one thread, so another has to be used for the next request, whereas with quick requests the threads basically round-robin.
However, this really shouldn't be spiking your CPU. Webservers have an upper limit on the number of "workers" they spawn, and when they're all tied up, they're all tied up. The server won't go past that limit, so unless that limit (whether you set it or it's the default) is higher than the box can handle, it shouldn't push your CPU that high.
Regardless, all of that is merely informational and doesn't really solve your problem. With such a long-running request, you should be offloading this from your webserver as quickly as possible. The webserver should merely hand off the request to another process that can handle it asynchronously and then employ polling to notify the client when the response is ready. Node.js is used a lot in similar scenarios, but I really don't have enough experience with it to give you any real guidance beyond that.

You should look into using message queues to offload tasks so that your user requests are not blocked.
You could look into the Python libraries Kombu and Celery to handle messages and tasks.
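As a rough sketch of what that could look like with Celery (the broker URL, task name, view names, and external endpoint below are illustrative, not from the question): the slow external call runs in a Celery worker, the Django view just enqueues it and returns a task id, and the client polls a status view.

    # tasks.py -- illustrative module; broker/backend URLs are assumptions
    from celery import Celery
    import requests

    app = Celery("myproject",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/0")

    @app.task
    def call_external_api(payload):
        # The ~20 s external call now runs in a worker process,
        # not in an Apache/mod_wsgi request thread.
        response = requests.post("https://external.example.com/api",
                                 json=payload, timeout=30)
        return response.json()

    # views.py -- enqueue and return immediately; the client polls for the result
    from django.http import JsonResponse
    from .tasks import call_external_api

    def submit(request):
        result = call_external_api.delay({"q": request.GET.get("q", "")})
        return JsonResponse({"task_id": result.id}, status=202)

    def status(request, task_id):
        result = call_external_api.AsyncResult(task_id)
        payload = {"state": result.state}
        if result.ready():
            payload["result"] = result.result
        return JsonResponse(payload)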

You are likely using the prefork MPM with Apache and mod_wsgi embedded mode. This is a bad combination by default because Apache is set up for PHP, not fat Python web applications. Read:
http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html
which explains this exact sort of issue.
Use mod_wsgi daemon mode at the minimum, and preferably also change to worker MPM for Apache.
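For reference, a minimal sketch of what daemon mode might look like in the Apache config; the group name, path, and process/thread counts are placeholders to tune for the hardware, not recommendations:

    # Illustrative httpd configuration -- name, path and counts are placeholders
    WSGIDaemonProcess myapp processes=2 threads=15 display-name=%{GROUP}
    WSGIProcessGroup myapp
    WSGIScriptAlias / /path/to/myapp/wsgi.py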

Related

How to warm up a Django web service before opening it to the public?

I'm running a Django web application on AWS ECS.
I'd like to warm up the server when deploying a new version (the first request takes some time because Django has to load up).
Is there a way for warming up the server before registering it to the Application load balancer?
Edit
I'm using nginx + uwsgi
I assumed that you use mod_wsgi, because that is the behavior described here:
Q: Why do requests against my application seem to take forever, but then after a bit they all run much quicker?
A: This is because mod_wsgi by default performs lazy loading of any application. That is, an application is only loaded the first time that a request arrives which targets that WSGI application. This means that those initial requests will incur the overhead of loading all the application code and performing any startup initialisation.
This startup overhead can appear to be quite significant, especially if using Apache prefork MPM and embedded mode. This is because the startup cost is incurred for each process, and with prefork MPM there are typically a lot more processes than if using worker MPM or mod_wsgi daemon mode. Thus, as many requests as there are processes will run slowly, and everything will only run at full speed once the code has all been loaded.
Note that if recycling of Apache child processes or mod_wsgi daemon processes after a set number of requests is enabled, or if for embedded mode Apache decides itself to reap any of the child processes, then you can periodically see these delayed requests occurring.
Some of the benchmarks for mod_wsgi which have been posted do not take into account these startup costs and wrongly try to compare the results to other systems, such as FastCGI or proxy-based systems, where the application code would be preloaded by default. As a result mod_wsgi is painted in a worse light than is reality. If mod_wsgi were configured correctly, the results would be better than is shown by those benchmarks.
For some cases, such as when WSGIScriptAlias is being used, it is actually possible to preload the application code when the process first starts, rather than when the first request arrives. To preload an application, see the WSGIImportScript directive.
I think you could try using WSGIScriptAlias; see more here.
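If you are on mod_wsgi, a hedged sketch of the preloading that the FAQ describes, combining WSGIScriptAlias with WSGIImportScript (the paths and the process group name are placeholders):

    # Illustrative: load the application when the daemon process starts,
    # rather than on the first request. Paths and group names are placeholders.
    WSGIDaemonProcess myapp
    WSGIScriptAlias / /path/to/project/wsgi.py
    WSGIProcessGroup myapp
    WSGIApplicationGroup %{GLOBAL}
    WSGIImportScript /path/to/project/wsgi.py process-group=myapp application-group=%{GLOBAL}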
I just changed the health check from an nginx-based one to a uwsgi-related one: create an endpoint in Django and let the ELB use that as the health check.
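For illustration, a minimal sketch of such an endpoint; the URL and view names are assumptions:

    # views.py -- returning 200 from Django itself (rather than from nginx)
    # means the target only becomes healthy once the app has actually loaded
    from django.http import HttpResponse

    def health_check(request):
        return HttpResponse("ok")

    # urls.py
    from django.urls import path
    from .views import health_check

    urlpatterns = [
        path("healthz/", health_check),
    ]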

Configure uwsgi server for performance

I am deploying a uWSGI server for a Django app. Each request will have a latency of around 2 seconds. I need to handle 100 QPS. On a 4-core machine, how should I configure the number of processes and the number of threads? I tried to play with the values, but I do not understand what I am doing.
Go through the uWSGI Things to know page. 100 requests per second should be easily attainable with uWSGI.
Based on uWSGI behavior I've experienced, I would recommend that you start with only processes and don't use any threads. With both processes and threads, we observed that there seemed to be an affinity to use threads over processes. That resulted in a single process handling all requests until its thread pool was fully occupied, and only then were requests handled by the next process. This resulted in poor utilization of resources, as a single core was maxed out while all the others sat idle. Turning off threading resulted in a massive performance boost for our particular use model.
Your experience may be different. The uWSGI authors stress that there isn't any magic config combination: it's completely dependent on your particular use case. You need to benchmark your app against various configurations to find the sweet spot. Additionally, unless you're able to use benchmarks that perfectly model your actual production load, you'll want to continue to monitor performance and methodically tweak settings after you deploy.
From the Things to know page:
There is no magic rule for setting the number of processes or threads
to use. It is very much application and system dependent. Simple math
like processes = 2 * cpucores will not be enough. You need to
experiment with various setups and be prepared to constantly monitor
your apps. uwsgitop could be a great tool to find the best values.
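As a rough reasoning check: at roughly 2 seconds per request, 100 QPS means around 200 requests in flight at once (Little's law), so on 4 cores this only works if most of that time is spent waiting on I/O rather than burning CPU. A starting-point configuration, with every number below being a placeholder to benchmark rather than a recommendation, might look like:

    ; uwsgi.ini -- illustrative starting point only; benchmark and adjust
    [uwsgi]
    module = myproject.wsgi:application
    master = true
    ; processes only, no threads, per the advice above
    processes = 8
    ; recycle any worker stuck for more than 30 seconds
    harakiri = 30
    ; stats socket so uwsgitop can watch worker utilisation
    stats = 127.0.0.1:9191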

Django and Websockets: How to create, in the same process, an efficient project with both WSGI and websockets?

I'm trying to build a Django application with an asynchronous part: websockets. Just as a little challenge, I want to mount everything in the same process. I tried Socket.IO but couldn't manage to actually use websockets instead of long polling (which killed my browser several times, until I gave up).
What I then tried was a not-so-maintained library based on gevent-websocket. However, it had many errors and was not easy to debug.
Now I am trying a Tornado approach but AFAIK (please correct me if I'm wrong) integrating async with a regular django app wrapped by WSGIContainer (websockets would go through Tornado, regular connections through Django) will be a true server killer if a resource is heavy or, somehow, the Django ORM goes slow into heavy operations.
I was thinking of moving to Twisted/Cyclone. Before I move from one architecture with such an issue to ANOTHER architecture with such an issue, I'd like to ask:
Does Tornado (and/or Twisted) have an architecture for scheduling tasks in the same way gevent does? (This means: when certain greenlets "block", control is switched to other greenlets, at least until the operation finishes.) I'm asking this because (please correct me if I'm wrong) a regular Django view will not be suitable for stuff like @inlineCallbacks, and will cause the whole server to be blocked (incl. the websockets).
I'm new to async programming in Python, so there's a huge chance I have misinformation about more than one concept. Please help me clarify this before I switch.
Neither Tornado nor Twisted have anything like gevent's magic to run (some) blocking code with the performance characteristics of asynchronous code. Idiomatic use of either Tornado or Twisted will be visible throughout your app in the form of callbacks and/or Futures/Deferreds.
In general, since you'll need to run multiple python processes anyway due to the GIL, it's usually best to dedicate some processes to websockets with Tornado/Twisted and other processes to Django with the WSGI container of your choice (and then put nginx or haproxy in front so it looks like a single service to the outside world).
If you still want to combine django and an asynchronous service in the same process, the next best solution is to use threads. If you want the two to share one listening port, the listener must be a websocket-aware HTTP server that can spawn other threads for WSGI requests. Tornado does not yet have a solution for this, although one is planned for version 4.1 (https://github.com/tornadoweb/tornado/pull/1075). I believe Twisted's WSGI container does support running the WSGI workers in threads, but I don't have any experience with it myself. If you need them in the same process but do not need to share the same port, then you can simply run the IOLoop or Reactor in one thread and the WSGI container of your choice in another (with its associated worker threads).
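A minimal sketch of that last option (the IOLoop in one thread, a WSGI server in another, on separate ports); the module path, ports and handler are assumptions, and wsgiref stands in for whichever WSGI container you actually choose:

    import threading
    from wsgiref.simple_server import make_server  # stand-in for your WSGI container

    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    from myproject.wsgi import application  # assumed Django WSGI callable

    class EchoSocket(tornado.websocket.WebSocketHandler):
        def on_message(self, message):
            self.write_message(message)

    def run_wsgi():
        # The blocking Django side serves on its own port in its own thread.
        make_server("127.0.0.1", 8000, application).serve_forever()

    if __name__ == "__main__":
        threading.Thread(target=run_wsgi, daemon=True).start()
        tornado.web.Application([(r"/ws", EchoSocket)]).listen(8001)
        tornado.ioloop.IOLoop.current().start()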

Handle computationally-intensive requests to a Django web application, possibly using a pre-forking RPC server

I am running a Django-based webservice with Gunicorn behind nginx as a reverse proxy.
My webservice provides a Django view which performs calculations using an external instance of MATLAB. Because the MATLAB startup takes some seconds on its own, even requests incurring only very simple MATLAB calculations require this amount of time to be answered.
Moreover, due to the MATLAB sandboxing done in my code, it is important that only one MATLAB instance runs at a time per webserver process. (Therefore, I am currently using the Gunicorn sync worker model, which implements a pre-forking webserver but does not utilize any multithreading.)
To improve user experience, I now want to eliminate the waiting time for MATLAB startup by keeping some (e.g. 3-5) "ready" MATLAB instances running and using them as requests come in. After a request has been serviced, the MATLAB process would be terminated and a new one would be started immediately, to be ready for another request.
I have been evaluating two ways to do this:
Continue using Gunicorn sync worker model and keep one MATLAB instance per webserver process.
The problem with this seems to be that incoming requests are not distributed to the webserver worker processes in a round-robin fashion. Therefore, it could happen that all computationally-intensive requests hit the same process and the users still have to wait because that single MATLAB instance cannot be restarted as fast as necessary.
Outsource the MATLAB computation to a backend server which does the actual work and is queried by the webserver processes via RPC.
In my conception, there would be a number of RPC server processes running, each hosting a running MATLAB process. After a request has been processed, the MATLAB process would be restarted. Because the RPC server processes are queried round-robin, a user would never have to wait for MATLAB to start (except when there are too many requests overall, but that is inevitable).
Because of the issues described with the first approach, I think the RPC server (approach 2) would be the better solution to my problem.
I have already looked at some RPC solutions for Python (especially Pyro and RPyC), however I cannot find an implementation that uses a pre-forking server model for the RPC server. Remember, due to the sandbox, multithreading is not possible and if the server only forks after a connection has been accepted, I would still need to start MATLAB after that which would thwart the whole idea.
Does anybody know a better solution to my problem? Or is the RPC server actually the best solution? But then I would need a pre-forking RPC server (= fork some processes and let them all spin on accept() on the same socket), or at least an RPC framework that can be easily modified (monkey-patched?) to be pre-forking.
Thanks in advance.
I have solved the problem by making my sandbox threadsafe. Now I can use any single-process webserver and use a Queue to get spare MATLAB instances that are spawned in a helper thread.
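Roughly, the pattern looks like this (start_matlab() and the handle's run()/terminate() methods are hypothetical stand-ins for the actual sandboxed MATLAB launcher):

    import queue
    import threading

    POOL_SIZE = 3
    ready_instances = queue.Queue(maxsize=POOL_SIZE)

    def start_matlab():
        # Hypothetical: launch a fresh sandboxed MATLAB process and return a handle.
        raise NotImplementedError

    def replenisher():
        # put() blocks once the queue is full, so at most POOL_SIZE warm instances
        # exist at any time; a new one is started as soon as one is taken.
        while True:
            ready_instances.put(start_matlab())

    threading.Thread(target=replenisher, daemon=True).start()

    def handle_request(job):
        matlab = ready_instances.get()    # already started, so no startup wait
        try:
            return matlab.run(job)        # hypothetical API on the handle
        finally:
            matlab.terminate()            # one-shot instances, as described above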

How do I handle blocking IO in mod_wsgi/django?

I am running Django under Apache+mod_wsgi in daemon mode with the following config:
WSGIDaemonProcess myserver processes=2 threads=15
My application does some IO on the backend, which could take several seconds.
from django.http import HttpResponse

def my_django_view(request):
    content = ...  # Do some processing on a backend file
    return HttpResponse(content)
It appears that if I am processing more than 2 http requests that are handling this kind of IO, Django will simply block until one of the previous requests completes.
Is this expected behavior? Shouldn't threading help alleviate this i.e. shouldn't I be able to process up to 15 separate requests for a given WSGI process, before I see this kind of wait?
Or am I missing something here?
If the processing is in Python, then the Global Interpreter Lock is not being released -- in a single Python process, only one thread of Python code can be executing at a time. The GIL is usually released inside C code, though -- like most I/O, for example.
If this kind of processing is going to happen a lot, you might consider running a second "worker" application as a daemon, reading tasks from the database, performing the operations and writing results back to the database. Apache might decide to kill processes that take too long to respond.
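A very rough sketch of that worker-daemon idea, assuming a hypothetical Task model with status/result fields and a process_file() helper (none of which are from the original answer):

    import time

    import django
    django.setup()  # assumes DJANGO_SETTINGS_MODULE is set in the environment

    from myapp.models import Task  # hypothetical model

    def process_file(task):
        # Placeholder for the slow backend file processing.
        return "done"

    while True:
        for task in Task.objects.filter(status="pending"):
            task.status = "running"
            task.save(update_fields=["status"])
            task.result = process_file(task)
            task.status = "finished"
            task.save(update_fields=["status", "result"])
        time.sleep(1)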
+1 to Radomir Dopieralski's answer.
If the task takes long you should delegate it to a process outside the request-response cycle, either by using a standard cron, or some distributed task queue like Celery
Databases for workload offloading were quite the thing in 2010, and a good idea then, but we've come a bit farther now.
We're using Apache Kafka as a queue to store our in-flight workload. So the dataflow is now:
User -> Apache httpd -> Kafka -> python daemon processor
The user's POST operation puts data into the system via a WSGI app that just writes it very quickly to a Kafka queue. Minimal sanity checking is done in the POST handler to keep it fast while still catching obvious problems. Kafka stores the data very quickly, so the HTTP response is zippy.
A separate set of python daemons pull data from Kafka and do processing on it. We actually have multiple processes that need to process it differently, but Kafka makes that fast by only writing once and having multiple readers read the same data if needed; no penalty for duplicate storage is incurred.
This allows very, very fast turnaround and optimal resource usage, since we have separate boxes handling the pull-from-Kafka work and can tune that to reduce lag as needed. Kafka is HA, with the same data written to multiple boxes in the cluster, so my manager doesn't complain about 'what happens if' scenarios.
We're quite happy with Kafka. http://kafka.apache.org
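For illustration only, the two halves of that flow might look roughly like this with the kafka-python package (the topic name, servers and the handle() function are assumptions):

    # producer side (WSGI app): validate minimally, write to Kafka, respond immediately
    import json
    from django.http import JsonResponse
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def post_view(request):
        payload = {"body": request.body.decode("utf-8")}
        producer.send("work-items", payload)   # Kafka acks quickly, so the response is zippy
        return JsonResponse({"accepted": True}, status=202)

    # consumer side: a separate daemon process (or several) pulls from the same topic
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "work-items",
        bootstrap_servers="localhost:9092",
        group_id="processor",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        handle(message.value)   # hypothetical processing function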