throttling http api calls with delay - django

I'm trying to implement some throttles on our REST API. A typical approach is to block requests after a certain threshold (with a 403 or 429 response). However, I've seen one API that adds a delay to the response instead.
As you make calls to the API, we will be looking at your average calls per second (c/s) over the previous five-minute period. Here's what will happen:
over 3c/s and we add a 2 second delay
over 5c/s and we add a 4 second delay
over 7c/s and we add a 5 second delay
From the client's perspective, I see this being better than getting back an error. The worst that can happen is that you'll slow down.
I am wondering how this can be achieved without negatively impacting the app server: to add those delays, the server needs to keep the request open, tying up more and more request processors and leaving less capacity for new incoming requests.
What's the best way to accomplish this? (i.e. is this something that can be done on the web server / load balancer so that the application server is not negatively affected? Is there some kind of a throttling layer that can be added for this purpose?)
We're using Django/Tastypie, but the question is more on the architecture/conceptual level.

If you are using a synchronous application server, which is the most common setup for Django applications (for example gunicorn with the default --worker-class sync), then adding such a delay in the application would indeed have a very bad impact on performance: a worker handling a delayed request is blocked for the whole delay period.
But you can use an asynchronous application server (for example gunicorn with --worker-class gevent), and then the overhead should be negligible: a worker that handles a delayed request is able to handle other requests while the delay is in progress.
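For illustration, here is a minimal sketch of what such a delay could look like as Django middleware under a gevent worker. The middleware class, the in-process counter, and the thresholds are my own assumptions, not something from the API in question; gevent's monkey-patching is what makes time.sleep() cooperative here:

import time
from collections import defaultdict, deque

# Hypothetical in-process tracker; a real deployment would use a shared
# store such as Redis so that every worker sees the same counts.
_recent_calls = defaultdict(deque)

class DelayThrottleMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        key = request.META.get("REMOTE_ADDR", "")
        now = time.time()
        window = _recent_calls[key]
        window.append(now)
        while window and window[0] < now - 300:  # keep a five-minute window
            window.popleft()
        rate = len(window) / 300.0  # average calls per second
        if rate > 7:
            time.sleep(5)  # under gevent, sleep() yields to other greenlets
        elif rate > 5:
            time.sleep(4)
        elif rate > 3:
            time.sleep(2)
        return self.get_response(request)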
Doing this in the reverse proxy server may be a better option, because it lets you adjust the policy easily and flexibly. There is an external nginx module for exactly this kind of thing.

Related

How do I bind a React component to the database state?

I want to bind the state of a React component to a backend server running on Django. What I am trying to achieve is an AJAX call that constantly updates the state of the component. However, constantly making an API call every couple of seconds might have an impact on the performance of the application, so I am looking for a better way to achieve the same thing.
You've got two possible solutions -- polling and websockets. Polling means making a call at a specified interval asking what's changed. That's easy but, as you said, can be inefficient. Websockets involve the server sending push notifications to the browser when something changes.
This site gives an introduction on how to use websockets for what you want. It's better, but more complicated to implement.
You'll have to make a judgment call about traffic -- if your traffic could handle a polling call every five seconds from every user, that's the much easier option. If not, websockets it is.
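If you go with polling, the server side can stay very simple. Here is a minimal sketch of a Django view the component could poll; the Item model and its fields are hypothetical placeholders:

from django.http import JsonResponse
from myapp.models import Item  # hypothetical model with a 'modified' field

def item_state(request):
    # Return the current state plus a version marker that the client can
    # compare against its last-seen value to decide whether to re-render.
    item = Item.objects.latest("modified")
    return JsonResponse({
        "state": item.state,
        "version": item.modified.isoformat(),
    })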

threading=True with flask-socketio

I have been using flask, and some of my route handlers start computations that can take several minutes to complete. Using flask's development server, I can use app.run(threaded=True) and my server will continue to respond to other requests while it's off performing these multi-minute computations.
Now I've started using Flask-SocketIO and I'm not sure how to do the equivalent thing. I understand that I can explicitly spawn a separate thread in python any time it starts one of these computations. Is that the only way to do it? Or is there something equivalent to threaded=True for flask-socketio? (Or, more likely, am I just utterly confused.)
Thanks for any help.
The idea of the threaded mode in Flask/Werkzeug is to enable the development server to handle multiple requests concurrently. In the default mode, the server can handle one request at a time, if a client sends a request while the server is already processing a previous request, then the second request has to wait until that first request is complete. In threaded mode, Werkzeug spawns a thread for each incoming request, so multiple requests are handled concurrently. You obviously are taking advantage of the threaded mode to have requests that take very long to return, while keeping the server responsive to other requests.
Note that this approach is hard to scale properly when you move out of the development web server and into a production web server. For a worker-based server you have to pick a fixed number of workers, and that gives you the maximum number of concurrent requests you can have.
The alternative approach is to use a coroutine-based server, such as gevent, which is fully supported by Flask. For gevent there is a single worker process, but in it there are multiple lightweight (or "green") threads, which cooperatively allow each other to run. The key to making things work under this model is to ensure that these green threads do not abuse the CPU time they get, because only one can run at a time. When this is done right, the server can scale much better than with the multiple-worker approach I described above, and you can easily have hundreds or thousands of clients handled in this fashion.
So now you want to use Flask-SocketIO, and this extension requires the use of gevent. In case the reason for this requirement isn't clear, unlike HTTP requests, SocketIO uses the WebSocket protocol, which requires long-lived connections. Using gevent and green threads makes it possible to have a potentially large number of constantly connected clients, something that would be impossible to do with multiple workers.
The problem is your long calculation, which is not friendly to the gevent type of server. To make it work, you need to ensure your calculation function yields often, so that other threads get a chance to run and don't starve. For example, if your calculation function has a loop in it, you can do something like this:
import gevent

def my_long_calculation():
    while some_condition:
        # do some work here
        # let other green threads run
        gevent.sleep()
The sleep() function will basically halt your thread and switch to any other threads that need CPU. Eventually control will be given back to your function, and at that point it'll move on to the next iteration. You need to make sure the sleep calls are not too spaced out (as that will make the rest of the application unresponsive) nor too close together (as that may slow down your calculation).
So to answer your question, as long as you yield properly in your long calculation, you do not need to do anything special to handle concurrent requests, as this is the normal operating mode of gevent.
If for any reason the yield approach is not possible, then you may need to think about offloading the CPU-intensive tasks to another process. Maybe use Celery to have these done as a job queue.
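As a rough sketch of that offloading idea (the broker URL and do_the_heavy_work are placeholders, not part of your code):

from celery import Celery

celery_app = Celery("tasks", broker="redis://localhost:6379/0")

@celery_app.task
def long_calculation(params):
    # Runs in a separate Celery worker process, so the gevent web
    # process is never starved by CPU-bound work.
    return do_the_heavy_work(params)  # placeholder for your computation

# In the route or SocketIO handler: long_calculation.delay(params)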
Sorry for the long-winded answer. Hope this helps!

cpu load and django application that makes long-response-time requests to external API

I'm developing a web application in python for which each user request makes an API call to an external service and takes about 20 seconds to receive a response. As a result, in the event of several concurrent requests being made, the CPU load goes crazy (>95%) with several idle processes.
The server consists of a 1.6 GHz dual core Atom 330 with 2GB RAM.
The web app is developed in python and is served through Apache with mod_wsgi.
My question is the following: will a non-blocking webserver such as Tornado improve the CPU load and thus handle more concurrent users (I'm also interested in why)? Can you suggest any other scalable solution?
This really doesn't have anything to do with blocking -- well, it does, but not in the way you might think. The 20-second request is blocking one thread, so another has to be utilized for the next request, whereas with quick requests the threads basically round-robin.
However, this really shouldn't be spiking your CPU usage. Web servers have an upper limit on the number of "workers" that get spawned, and when they're all tied up, they're all tied up. The server won't extend past that limit, so unless the limit is set (or defaults to) higher than what your box is capable of running, it shouldn't push your CPU that high.
Regardless, all that is merely informational and doesn't really solve your problem. With such a long-running request, you should be offloading this from your webserver as quickly as possible. The webserver should merely hand off the request to another process that can handle it asynchronously, and then employ polling to notify the client when the response is ready. Node.js is used a lot in similar scenarios, but I really don't have enough experience with it to give you any real guidance beyond that.
You should look into using message queues to offload tasks so that your user requests are not blocked.
You could look into the Python libraries Kombu and Celery to handle messages and tasks.
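As a rough sketch (the broker/backend URLs and the view wiring are placeholders), the idea is that a Celery worker performs the slow external call while the web process stays free, and the client polls for the result:

import requests
from celery import Celery
from celery.result import AsyncResult
from django.http import JsonResponse

celery_app = Celery("tasks", broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/1")

@celery_app.task
def call_external_api(url):
    # The ~20-second wait now happens in a Celery worker,
    # not in an Apache/mod_wsgi request thread.
    return requests.get(url, timeout=30).json()

def start(request):
    task = call_external_api.delay("https://external.example/api")
    return JsonResponse({"task_id": task.id})

def poll(request, task_id):
    result = AsyncResult(task_id, app=celery_app)
    data = {"ready": result.ready()}
    if result.ready():
        data["value"] = result.get()
    return JsonResponse(data)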
You are likely using prefork MPM with Apache and mod_wsgi embedded mode. This is a bad combination by default because Apache is set up for PHP and not fat Python web applications. Read:
http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usage.html
which explains this exact sort of issue.
Use mod_wsgi daemon mode at the minimum, and preferably also change to worker MPM for Apache.

HTTP push examples in Flex

I am trying to create a simple board game (a kind of checkers), where users will be able to play online with each other using flex application as a client.
I am using a django application to process the game on the server side. And I have come across a problem: if one user makes a move, I can send it to the server, but how do I let the opponent know about it?
The way I am thinking of doing it is to create a timer and send requests to the server asking whether the opponent's move was made or not... But here we have 2 limitations:
1) Each client would produce a large number of requests (I'm not sure how the server would cope if I had e.g. 100 such clients).
2) If players choose a game with a time limit (for example 5 minutes per game), it will be very important to show them the situation on the board as soon as it changes (without a pause). But the timer sends a request only on the timer event, so if, for example, I choose a tick interval of 5 seconds, the other side will not be aware of changes for up to 5 seconds.
Think of it this way: if you poll every 1 or 2 seconds, that should be quick enough not to be noticed by either client. A simple REST request checking for changes is bloody quick, and a modern web server should be able to handle 100 such requests without issue.
Implement it with the timer now, run some performance tests and worry about servers after you're done.
If you are worried later, you can always have graduated timers. e.g., check after 100ms, 200ms, 400ms, 800ms, 1600ms, etc... with a cap at 5 seconds or something.
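To make the graduated-timer idea concrete, here is the backoff logic sketched in Python (the Flex client would express the same thing with a Timer; check_for_move stands in for whatever REST call asks the server about the opponent's move):

import time

def poll_with_backoff(check_for_move, max_interval=5.0):
    interval = 0.1  # poll fast right after our own move
    while not check_for_move():
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # 100ms, 200ms, 400ms, ...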
Take a look at this code for some ideas maybe, since chat uses similar concepts: http://anantgarg.com/2009/05/13/gmail-facebook-style-jquery-chat/
One way is to use a TCP Socket from the client to connect back to your server. Have the client listen for data, and have the server send updates whenever needed. This may require firewall changes (to allow the port you'll be using) and a server which accepts multiple persistent client connections. This may only work for a fixed smallish number of clients, since if you are keeping multiple connections open it will incur some server overhead.
If you have firewall restrictions and need to use HTTP ports, you can investigate Comet implementations. What I proposed in the first paragraph is more or less the same thing - Comet just does it over HTTP and standardises some aspects of the communication.
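For a sense of what the Comet/long-polling variant looks like on the Django side, here is a bare-bones sketch (get_move_after is a placeholder lookup; you would want an async-friendly worker such as gevent so held-open connections don't pin OS threads):

import time
from django.http import JsonResponse

def wait_for_move(request, game_id, last_seen):
    # Hold the request open until the opponent moves or we time out.
    deadline = time.time() + 25  # stay under typical proxy timeouts
    while time.time() < deadline:
        move = get_move_after(game_id, last_seen)  # placeholder lookup
        if move is not None:
            return JsonResponse({"move": move})
        time.sleep(0.5)  # under gevent this yields to other greenlets
    return JsonResponse({"move": None})  # timed out; the client re-polls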

Forcing asmx web service to handle requests one at a time

I am debugging an ASMX web service that receives "bursts" of requests, i.e. it is likely that the web service will receive 100 asynchronous requests within about 1 or 2 seconds. Each request seems to take about a second to process (this is expected and I'm OK with this performance). What is important, however, is that each request is dealt with sequentially and no parallel processing takes place. I do not want any concurrent request processing due to the external components called by the web service. Is there any way I can force the web service to handle each request sequentially?
I have seen the maxconnection attribute in machine.config, but this seems to only apply to outbound connections, whereas I wish to throttle the incoming connections.
Please note that refactoring into WCF is not an option at this point in time.
We are using IIS6 on Win2003.
What I've done in the past is to simply put a lock statement around any access to the external resource I was using. In my case, it was a piece of unmanaged code that claimed to be thread-safe, but which in fact would trash the C runtime library heap if accessed from more than one thread at a time.
Perhaps you should be queuing the requests up internally and processing them one by one?
It may cause the clients to poll for results (if they even need them), but you'd get the sequential pipeline you wanted...
In IIS7 you can set a limit on the number of connections allowed to a web site. Can you use IIS7?