I would like to drop any request which takes longer than x seconds, and attend to the new request on Jetty.
I thought this would be a very common thing for people to do, but apparently not: I cannot find any documentation on doing anything like it.
I configured thread_pool idleTimeout, which understandably doesn't have any effect on this.
I also configured the connector idle timeout. I have an endpoint which simply sleeps the thread; I thought the idle timeout would kill this thread, as it is obviously idle, but that didn't happen either.
I was wondering what is the proper way of handling these scenarios.
I have been using Flask, and some of my route handlers start computations that can take several minutes to complete. Using Flask's development server, I can use app.run(threaded=True) and my server will continue to respond to other requests while it's off performing these multi-minute computations.
Now I've started using Flask-SocketIO and I'm not sure how to do the equivalent thing. I understand that I can explicitly spawn a separate thread in Python any time one of these computations starts. Is that the only way to do it? Or is there something equivalent to threaded=True for Flask-SocketIO? (Or, more likely, am I just utterly confused?)
Thanks for any help.
The idea of the threaded mode in Flask/Werkzeug is to enable the development server to handle multiple requests concurrently. In the default mode, the server can handle one request at a time; if a client sends a request while the server is already processing a previous one, the second request has to wait until the first is complete. In threaded mode, Werkzeug spawns a thread for each incoming request, so multiple requests are handled concurrently. You are obviously taking advantage of threaded mode to have requests that take very long to return while keeping the server responsive to other requests.
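For illustration, here is a minimal sketch of that setup (the route names and the sleep stand-in are mine, not from the question):

from flask import Flask
import time

app = Flask(__name__)

@app.route('/slow')
def slow():
    time.sleep(120)  # stand-in for a multi-minute computation
    return 'done'

@app.route('/quick')
def quick():
    return 'still responsive'

if __name__ == '__main__':
    # one thread per request; without threaded=True, /quick would
    # have to wait until /slow finished
    app.run(threaded=True)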
Note that this approach is hard to scale properly when you move out of the development web server and into a production web server. For a worker-based server you have to pick a fixed number of workers, and that number is the maximum number of concurrent requests you can handle.
The alternative approach is to use a coroutine-based server, such as gevent, which is fully supported by Flask. With gevent there is a single worker process, but within it there are multiple lightweight (or "green") threads that cooperatively allow each other to run. The key to making things work under this model is to ensure these green threads do not hog the CPU time they get, because only one can run at a time. When this is done right, the server can scale much better than with the multiple-worker approach described above, and you can easily have hundreds or thousands of clients handled in this fashion.
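As a rough sketch, serving a Flask app under gevent looks something like this (the port and the monkey-patching choice are illustrative assumptions):

from gevent import monkey
monkey.patch_all()  # make blocking stdlib calls (sockets, sleep) cooperative

from gevent.pywsgi import WSGIServer
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'served from a green thread'

if __name__ == '__main__':
    # a single process; each request runs in a lightweight green thread
    WSGIServer(('0.0.0.0', 5000), app).serve_forever()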
So now you want to use Flask-SocketIO, and this extension requires the use of gevent. In case the reason for this requirement isn't clear: unlike regular HTTP requests, SocketIO uses the WebSocket protocol, which requires long-lived connections. Using gevent and green threads makes it possible to have a potentially large number of constantly connected clients, something that would be impossible to do with a fixed pool of workers.
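For reference, a minimal Flask-SocketIO sketch (the event name and payload are made up); socketio.run() replaces app.run() and serves both HTTP and WebSocket connections:

from flask import Flask
from flask_socketio import SocketIO, send

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('my event')  # illustrative event name
def handle_my_event(data):
    send('got it')        # reply to the connected client

if __name__ == '__main__':
    socketio.run(app)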
The problem is your long calculation, which is not friendly to the gevent type of server. To make it work, you need to ensure your calculation function yields often, so that other threads get a chance to run and don't starve. For example, if your calculation function has a loop in it, you can do something like this:
import gevent

def my_long_calculation():
    while some_condition:
        # do some work here
        # ...
        # let other threads run
        gevent.sleep()
The sleep() function will basically pause your thread and switch to any other green threads that need the CPU. Eventually control will be given back to your function, and at that point it'll move on to the next iteration. You need to make sure the sleep calls are neither too far apart (that will make the rest of the application unresponsive) nor too close together (that may slow down your calculation).
So to answer your question, as long as you yield properly in your long calculation, you do not need to do anything special to handle concurrent requests, as this is the normal operating mode of gevent.
If for any reason the yield approach is not possible, then you may need to think about offloading the CPU-intensive tasks to another process. Maybe use Celery to run these through a job queue.
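A minimal Celery sketch of that offloading idea, assuming a Redis broker (the broker URL and task body are illustrative):

from celery import Celery

celery = Celery('tasks', broker='redis://localhost:6379/0')

@celery.task
def my_long_calculation():
    # the CPU-heavy work now runs in a separate worker process,
    # so the gevent web process stays responsive
    ...

# from a request handler, fire and forget:
# my_long_calculation.delay()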
Sorry for the long-winded answer. Hope this helps!
After reading through the ZMQ manual about the load balancing broker, I thought it would be great to implement in my own code, so I did, adding some additional touches to make it more responsive. One performance enhancement I was looking to add was the ability to dispatch multiple long-running work jobs concurrently. I think I'm right about this (I could be wrong, though), so consider the following with respect to just the lbbroker code that's in the manual:
Two workers (clients) simultaneously request work, each with a long-running job given to them by a manager (or managers). In the current code, it's good that the work isn't round-robined to different recipients; it's handed out first-come, first-served. But there's also a problem: a reply is needed from the first worker that gets through before work can be dispensed to the second worker.
Basically, I want to dole work out as fast as there are workers ready to receive it, FCFS-style and concurrently as well. At the same time, I don't want to lose the model I have where manager A gets through to worker B, and worker B's reply gets back to manager A. Keeping this, which is facilitated by the request-reply pattern, while also allowing worker B to receive the manager's second work job while another worker may still be processing its own, is very much desired.
How can I most easily go about achieving this? Preferably by modifying my current lbbroker implementation, which isn't too different from lbbroker in the manual.
Thanks in advance.
As it turns out, my difficulties stemmed from an insufficiently specific understanding of the load balancing broker example: the broker does not use REP sockets, so it is not forced into a strict receive/reply alternation between each work request and worker request. The asynchronous issue therefore does not exist at all.
Basically, a ROUTER socket prefixes each message with the sender's identity frame, and by forwarding that frame along consistently you can avoid the issue entirely; the broker is free to connect other manager/worker pairs while N workers work concurrently.
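To make that concrete, here is a minimal pyzmq sketch of the identity-forwarding core, simplified from the guide's lbbroker (a real implementation would register/unregister the frontend instead of rebuilding the poller each pass):

import zmq

ctx = zmq.Context()
frontend = ctx.socket(zmq.ROUTER)  # managers connect here
backend = ctx.socket(zmq.ROUTER)   # workers connect here
frontend.bind('tcp://*:5555')
backend.bind('tcp://*:5556')

workers = []  # identities of workers that have signalled READY

while True:
    # only listen to managers when at least one worker is free
    poller = zmq.Poller()
    poller.register(backend, zmq.POLLIN)
    if workers:
        poller.register(frontend, zmq.POLLIN)
    socks = dict(poller.poll())

    if backend in socks:
        # either [worker_id, '', 'READY'] or [worker_id, '', client_id, '', reply]
        frames = backend.recv_multipart()
        workers.append(frames[0])        # this worker is free again
        if frames[2] != b'READY':
            # route the reply back to the manager whose identity it carries
            frontend.send_multipart(frames[2:])

    if frontend in socks:
        # [client_id, '', request]; prepend a free worker's identity
        frames = frontend.recv_multipart()
        backend.send_multipart([workers.pop(0), b''] + frames)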
When developing my web application with Django, I faced a problem: when I call some functions locally they work correctly, but once I call them over an HTTP request they are not executed.
I asked around and I was told to execute them asynchronously, outside the request/response cycle, using Celery and a message queue server. It worked well, but I still don't understand why I have to execute some tasks asynchronously even when I don't have a race condition and there's only one client calling the web service.
This is a big blind spot for me, because I made it work without really knowing how.
Can anyone explain it to me?
Thanks.
The two main benefits I know of for queue-based systems are:
First, a response can be given to the client without having to wait for the work to be done. This lets pages load faster and clients spend less time waiting.
Second, a queue gives you a central location for scheduled jobs that multiple workers can draw from. If a certain component of your application can't keep up with the amount of work there is to do (or if it fails for some reason), you can have other instances of that component doing the work, and there is a single place where all of the work that needs to be done can be found.
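For example, a Django view in that style might only enqueue the job and answer right away (the module and task names here are hypothetical):

from django.http import HttpResponse
from myapp.tasks import my_long_calculation  # hypothetical Celery task

def start_calculation(request):
    result = my_long_calculation.delay()  # enqueue; returns immediately
    return HttpResponse('queued as task %s' % result.id)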
I'm putting together a website that will track user-defined events with time limits. Every user would be free to create events, and when the time limit expired, the server would need to take some action based on the outcome of the event. The specific component I'm struggling with is the time-keeping: think like eBay's auction clock -- it's set to expire at a certain time, clearly runs server-side, and takes some action when the time runs out. Searches for a "server side timer," unfortunately, just bring back results for a timer that gets the time from the server instead of the client. :(
The most obvious solution is to run a script on the server, some program that would watch all the clocks and take action when any of them expired. Tragically, I'll be using free web hosting, and sincerely doubt that I'll be able to find someone who'll let me run arbitrary stuff on their servers.
The solutions that I've looked into:
Major concept option 1: persuade each user's browser to run the necessary timers (trivial javascript), and when the timers expire, take necessary action. The problem with this approach is obvious: there could be hundreds, if not thousands, of simultaneous expiring timers (they'll tend to expire in clusters), and the worst case is that every possible user could be viewing their timer expire. That's a server overload waiting to happen at the worst possible instant.
Major concept option 2: have one really trusted browser, say, a user logged in to the website as "cron" which could run all of the timers at once. The action would all happen in that browser's javascript, and would work great, as long as that browser never crashed, that machine never failed, and that internet connection never went down.
As you can see, I feel like I'm barking up the wrong forest on this problem. Some other ideas that have presented themselves:
AJAX: I'm not seeing anything here that will do quite what I need. It's all browser-run stuff, nothing like a server-side process that could run independent of the user's browser.
PHP: Runs neatly on the server, but only in response to client requests. I'm not seeing any clean way to make PHP fork off a process and run a timer independent of the user's browser.
JS: same problems as PHP, but easier to read. ;)
Ruby: There may be some multi-threading with Ruby, but it isn't readily apparent to me. Would it be possible to have each user's browser check to see if a timer process was running for their event, and spawn a new server-side ruby process if it wasn't?
I'm wide open for ideas -- I've started playing with concepts in JS and PHP, but I'm not tied to any language, particularly. The only constraint, really, is that I won't own the server that I'm running the site on, so I can't just run a neat little local process that does what I need it to do. :(
Any thoughts? Thanks in advance,
Dan
ASP.NET has multi-threading. You can have a static variable to collect the event data, and use a thread to do whatever is needed when the time comes. Afterwards you can empty the static variable so it's ready for future use.
http://leedale.wordpress.com/2007/07/22/multithreading-with-aspnet-20/
You might want to take a look at the Quartz scheduler for Java, which also has a .NET version. With a friendly open-source license (Apache 2.0), this is probably a very good starting point.
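If you end up in Python instead, APScheduler offers a similar one-shot "date" trigger; a minimal sketch (the handler, run date, and event ID are made up):

from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

def expire_event(event_id):
    # take whatever action the event's outcome requires
    print('event %s expired' % event_id)

# fire once at the event's deadline
sched.add_job(expire_event, 'date',
              run_date=datetime(2025, 1, 1, 12, 0), args=[42])
sched.start()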
If you can control cron jobs, which at least I could on HostPapa's shared hosting, you could run a PHP script every minute (cron's finest granularity) which checks the timers and takes action on any that have expired.
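The same idea works in any language; here is a Python sketch, assuming a hypothetical SQLite table of events:

# crontab entry (cron's finest granularity is one minute):
# * * * * * /usr/bin/python3 /path/to/check_timers.py

import sqlite3
import time

conn = sqlite3.connect('/path/to/events.db')
now = time.time()
expired = conn.execute(
    'SELECT id FROM events WHERE deadline <= ? AND handled = 0',
    (now,)).fetchall()
for (event_id,) in expired:
    # take whatever action the event requires, then mark it done
    conn.execute('UPDATE events SET handled = 1 WHERE id = ?', (event_id,))
conn.commit()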
I would suggest AJAX anyway. What we did on a game server was emulate "server connects to client" via an AJAX request to the server with no timeout (an asynchronous, hanging connection). Basically you create one extra connection for each client that hangs on the server and waits for the server to take some self-initiated action. After the action is done you immediately start a new hanging connection, so there is one open at all times and the server can talk to your client whenever it wants. You can send JavaScript code from the server that decides what happens next, you can check on the server side that clients hold these hanging connections to count them as valid, and of course you run your timers on the server.
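A minimal long-polling sketch of that hanging-connection idea in Flask (the single shared queue is a simplification; a real server would keep one per client):

import queue
from flask import Flask

app = Flask(__name__)
events = queue.Queue()  # the server puts messages here when it wants to talk

@app.route('/poll')
def poll():
    # the request hangs until the server has something to say;
    # the client acts on the reply and immediately reconnects
    return events.get()

@app.route('/fire/<msg>')
def fire(msg):
    events.put(msg)  # stand-in for a server-side timer expiring
    return 'ok'

if __name__ == '__main__':
    app.run(threaded=True)  # each hanging request occupies a thread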
I'm trying to test out modes of failure for software that interacts with a web service, and I've already had reported issues where problems occur if the software doesn't get a timely response (e.g., it's waiting a minute or longer). I'd like to simulate this so that I can track down and fix issues myself, but unplugging the network connection doesn't do the trick, because it returns immediately with no route found.
What I'd like to know is, is there a simple way I can make a CGI script that accepts a connection but just sits there, keeping the connection alive for several minutes, without doing a while (true) {} type of loop?
How about letting the script sleep for some (very long) time?
I don't know what language you are using for your scripting, but in .NET you could do something like Thread.Sleep(300000); (the argument is in milliseconds, so that's five minutes).
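In Python, a minimal CGI sketch of the same idea (the five-minute figure is arbitrary):

#!/usr/bin/env python3
import time

time.sleep(300)  # hold the connection open without burning CPU

print('Content-Type: text/plain')
print()
print('finally responding')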
Fiddler, the HTTP debugging proxy, is excellent for this sort of thing. You can simulate slow connections and, if you want, get it to "break" when a request comes in so you can simulate a response that never returns.
Go get it from here...
http://www.fiddlertool.com/fiddler/
You will have to idle in some way, since the connection will be closed as soon as your CGI script returns.
If your network equipment supports throttling you might want to limit outgoing traffic to something ridiculously low.