I have been working with Spring web applications on Jetty/Tomcat app servers for around two years now, but the thing that still eludes me is how multiple requests are handled in these servers. I understand that Spring is helpful in making singletons, but my understanding is limited to that.
Can someone point me to a good resource that can help me understand how multiple requests are handled?
This can be answered at so many levels that I have been staring at it for two days trying to figure out how to answer it...so I'll take a kinda high-level shot at it.
There is this server port that jetty listens on and some number of acceptor threads whose job it is to get connection objects made between the client and server side. Once you have that connection it flows through the jetty handler architecture doing things like authentication perhaps, or pulling off a session id and attaching a session object to the request. Then it works its way into the servlet handler and the appropriate servlet is found and you start dealing with the servlet-api. At that point you have a thread allocated to your request for all of the time you are in the servlet-api, at least under servlet 2.5. In servlet 3.0 you have some async mechanisms available to you, or you can use jetty-continuations as a way to get async support on servlet 2.5 api.
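To make the servlet 3.0 async mechanism concrete, here is a minimal sketch (the servlet, the worker pool size and the slow-work method are all hypothetical, not anything Jetty or the servlet spec ships): startAsync() lets the container thread go back to the pool as soon as doGet returns, and complete() hands the response back once the work is done.

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet just to illustrate the servlet 3.0 async mechanism.
@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowServlet extends HttpServlet {

    // Separate worker pool so the container's request threads are not tied up.
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The container thread is released back to its pool as soon as doGet returns.
        AsyncContext async = req.startAsync();
        async.setTimeout(30_000);

        workers.submit(() -> {
            try {
                String result = doSomethingSlow();           // placeholder for real work
                async.getResponse().getWriter().write(result);
            } catch (Exception e) {
                // log in a real application
            } finally {
                async.complete();                            // hands the response back
            }
        });
    }

    private String doSomethingSlow() {
        return "done";
    }
}
```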
Anyway, there is a thread pool that the server uses to allocate threads to those connectors, and those are ultimately the threads spending all their time in the servlet-api. The jetty continuations api and the newer servlet 3.0 support provide mechanisms to release threads back to the primary thread pool so they can spend time accepting and processing other requests.
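For a rough idea of where the pool and the acceptors are configured, here is an embedded-Jetty style sketch (Jetty 9-ish API; exact constructors vary between versions, so treat it as an approximation rather than a recipe): the QueuedThreadPool is the shared pool, and the connector is told how many acceptor and selector threads to take from it.

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class EmbeddedJettySketch {
    public static void main(String[] args) throws Exception {
        // One shared pool: acceptors, selectors and request handling all draw from it.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);

        Server server = new Server(threadPool);

        // 1 acceptor thread, 2 selector threads (NIO), listening on 8080.
        ServerConnector connector = new ServerConnector(server, 1, 2);
        connector.setPort(8080);
        server.addConnector(connector);

        // The handler chain the request flows through before hitting the servlet-api.
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        // context.addServlet(MyServlet.class, "/*");   // your servlet goes here
        server.setHandler(context);

        server.start();
        server.join();
    }
}
```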
There is obviously a lot more going on under the covers related to usage of the nio api's and how jetty efficiently manages all of this stuff, but maybe this is enough to sate your initial question. If not, feel free to peruse the jetty docs (http://www.eclipse.org/jetty/documentation/current) or look to the jetty mailing lists. There has been some discussion of jetty-9 optimizations around http, spdy, and websocket connection handling and processing in the blogs at Webtide (http://webtide.com/blogs).
I have already used Faye with Ruby on Rails; it's almost at zero cost for me because I'm running Faye on another server connected to my Rails app.
However, I have faced some problems: for example, when a query takes too long on the Rails server, after a while the Faye connection will fail and raise an exception.
Now I'm looking into ActionController::Live. Most of the implementations use Redis, which will be a bit costly for my startup, and I've realized I can't do subscribe/publish-style things with ActionController::Live.
My question: should I move over to ActionController::Live or stick with Faye? These are the things I want to accomplish:
Updates after commenting/feed
Notification system, based on pub/sub, similar to Faye.
Exception handling
Scalability: more users, more connections
I know that Faye uses Bayeux, whereas ActionController::Live uses SSE over HTTP.
Should I consider anything related to Socket.IO? SockJS?
I have already read through some of the questions about this topic on here, like:
Replace Faye with rails 4 server side events? Faye VS rails 4 streaming?
But I need more info:
Here are some notes on why I would stick with Faye, which might bring you closer to an answer on this question:
Browser compatibility
As you read in the related stackoverflow question, Faye has better browser compatibility.
Stability
Rails::Live functionality doesn't seem to be very stable yet. There's currently active development on Rails SSE. As an example, it's quite likely that you'll be affected by this issue.
Threading & blocking vs asynchronous non-blocking
Do you use multi-threading in your application? If you don't, I definitely wouldn't introduce it just for Rails::Live, as it opens you up to issues with non-threadsafe gems and limits your choice of servers.
If you do have multi-threading, each client will keep a thread open to your application. If you run out of threads your application will be unresponsive/dead. Consider how many threads you need to cater for peak times with users having multiple browser tabs open, or even DOS attacks where someone opens up a huge number of idle SSE/websocket connections to reach your max and take your app down. If you set a high max thread count to support many idle connections, you open up the possibility of having that many non-idle threads, which brings its own problems. Avoiding SSE/websockets and comet/long polling altogether is much safer for blocking apps. From what I understand, your setup runs Faye separately. The Faye server runs Ruby EventMachine or Node.js, which are both asynchronous and non-blocking and do not use a thread for each open connection, so it can handle a huge number of concurrent connections without problems.
My opinion is that a normal blocking Rails web application with a separate asynchronous non-blocking server for connections that stay open (to pass messages & make the app live) is the best of both worlds. This is what you have with Rails + Faye.
Update: Action Cable was announced at RailsConf 2015. It runs non-blocking as described above, but it's an integrated official Rails solution. Having a single framework with a massive community, with an integrated non-blocking connection handler for websockets that you can run and configure separately while everything works "out of the box", is a big advantage of Rails.
From Action Cable readme:
Action Cable is powered by a combination of EventMachine and threads. The framework plumbing needed for connection handling is handled in the EventMachine loop, but the actual channel, user-specified, work is handled in a normal Ruby thread. This means you can use all your regular Rails models with no problem, as long as you haven't committed any thread-safety sins.
To learn more you can read up on ActionCable & Underlying architecture.
I tried scouring the net, and 90% of the time came across pages detailing "HOW" to use Apache to implement a reverse proxy.
What I am wondering is how exactly the reverse proxy plugins are coded.
I know they parse the request and see which server it should be routed to.
Do they then create a thread for every connection from the end user and delegate to that thread the responsibility of connecting to the right server?
Do they keep on accepting more requests from other clients and creating similar threads?
When a thread gets the response from the server, does it reply to the client with it and then close? Or do they use a thread pool?
I am thinking about it from a C++ angle, i.e. whether multithreading is used to increase the proxy's throughput.
A bit dated, but very much worth the read - http://www.kegel.com/c10k.html. After reading that you should have a good idea of why a thread per connection is a really bad idea. If you are really interested in learning how scalable or high-performance servers are implemented, I suggest digging in and reading some source code. I particularly like the source for Apache HTTPD.
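To give a feel for the alternative the C10K page argues for, here is a deliberately tiny, hypothetical sketch of a single thread multiplexing many connections with a readiness selector (shown in Java NIO for brevity; the same idea applies in C++ via epoll/kqueue). It only echoes bytes back - a real reverse proxy would add HTTP parsing, backend connections and a small worker pool on top - but the skeleton is the event loop rather than a thread per connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadedEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                                  // wait until something is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                       // new client: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                  // data available: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {                           // client closed the connection
                        key.cancel();
                        client.close();
                        continue;
                    }
                    buffer.flip();
                    client.write(buffer);                       // naive: ignores partial writes
                }
            }
        }
    }
}
```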
I have an application running in a Java EE App Server and it needs to call a web service of a partner company.
Using wsimport.exe from my JDK (1.6) I have generated the client classes. I instantiate the service and get the port in order to call the web service.
I noticed that the first call to the web service is slow, and I am led to believe this is because it is validating the WSDL. Subsequent calls are fast.
I could keep the WSDL locally, and apparently that will speed up the first call.
In order to optimise my app, I was thinking I could create a pool of clients. This has the added advantage that I get some throttling in the app - let's say I have a pool of 5 clients, then at most I will be using memory for 5 clients. If the load on my server increased suddenly, I wouldn't have to worry about an unlimited number of clients causing an out-of-memory error. I am assuming, based on past experience, that the web service clients use a lot of memory...
Would you bother with a pool?
How would you get over the first call to the web service being slow?
What is the best way to create that pool, so that I have to do the least amount of programming (i.e. I'd like to use a library / API / whatever, so that I don't have to reinvent the wheel and code some hairy bugs).
The Apache Commons Pool might be exactly what I am after.
It is configurable and seems to have thought of everything.
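A rough sketch of how that could look with Commons Pool 2 (PartnerService, PartnerPort and doSomething stand in for whatever classes and operations wsimport generated for you; the pool size matches the throttling idea from the question):

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class PartnerPortPool {

    // Knows how to build one (expensive) client port; the pool decides when to call it.
    static class PortFactory extends BasePooledObjectFactory<PartnerPort> {
        @Override
        public PartnerPort create() {
            // The WSDL parsing cost is paid here, once per pooled instance,
            // not on every business call.
            return new PartnerService().getPartnerPort();
        }

        @Override
        public PooledObject<PartnerPort> wrap(PartnerPort port) {
            return new DefaultPooledObject<>(port);
        }
    }

    private final GenericObjectPool<PartnerPort> pool;

    public PartnerPortPool() {
        pool = new GenericObjectPool<>(new PortFactory());
        pool.setMaxTotal(5);               // at most 5 clients, as in the question
        pool.setBlockWhenExhausted(true);  // callers wait instead of blowing the heap
    }

    public String call(String input) throws Exception {
        PartnerPort port = pool.borrowObject();
        try {
            return port.doSomething(input);   // placeholder operation on the generated port
        } finally {
            pool.returnObject(port);
        }
    }
}
```

You could also warm the pool at startup (for example by calling pool.addObject() a few times) so the slow first call happens before any user traffic hits it.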
A colleague of mine suggested that you can use the @WebServiceRef annotation on a field in an EJB. The idea is that the server would inject a reference to a client, from which one can create a port for each thread that calls the EJB.
I assume that injected references come from a pool, although the specification doesn't appear to talk about this. The Javadoc for the annotation explicitly mentions that:
"the injected references are not thread safe"
Akka with a master/slave setup as shown in the link could work well, albeit a little more complex than the Apache Commons Pool listed in another answer. Akka also uses an execution pool with its own threads, which isn't strictly allowed in the Java EE world, although I'd argue that because a well-tested framework is in charge of the threads there is no danger, and it shouldn't interfere with the app server's control of threads anyway, as the number of threads handled by Akka is minimal.
I have read all the questions and answers I can find regarding Django and HTTP Push. Yet, none offer a clear, concise, beginning-to-end solution about how to accomplish a basic "hello world" of so-called "comet" functionality.
1) To what extent is the problem that HTTP simply isn't (at least so far) made for this? Are all the potential solutions essentially hacks?
2) What's the best currently available solution?
Orbited?
Some other Twisted-based solution?
Tornado?
node.JS?
XMPP w/ BOSH?
Some other solution?
3) How does nginx push module play into this discussion?
4) Which of these solutions require replacement of the typical mod_wsgi / nginx (or apache) deployment model? Why do they require this? Is this a favorable transition in any case?
5) How significant are the advantages of using a solution that is already in Python?
Alex Gaynor's presentation from PyCon 2010, which I just watched on blip.tv, is amazing and informative, but not terrifically specific on the current state of HTTP Push in Django. One thing that he said that gave me some confidence was this: Orbited does a good job of abstracting and simulating the concept of network sockets. Thus, when WebSockets actually land, we'll be in a good place for a transition.
6) How does HTML5 Websockets differ from current solutions? Is Gaynor's assessment of the ease of transition from Orbited accurate?
I'd take a look at evserver (http://code.google.com/p/evserver/) if all you need is comet.
It "supports [the] little known Asynchronous WSGI extension" and is built around libevent. Works like a charm and supports django. The actual handler code is a bit ugly, but it scales well as it really is async io.
I have used evserver and I'm currently moving to cyclone (tornado on twisted) because I need a little more than evserver offers. I need true bidirectional io (think socket.io (http://socket.io/)), and while evserver could support it, I thought it was easier to reimplement tornado's socket.io in cyclone (I opted for cyclone instead of tornado as cyclone is built on twisted, thus allowing for more transports that aren't implemented in twisted, in this case zeromq). Socket.io supports websockets, comet-style polling, and, much more interesting, flash-based websockets. I think that in most practical situations websockets + flash-based websockets are enough to support 99% of a website's visitors (according to adobe, flash penetration is about 99% (http://www.adobe.com/products/player_census/flashplayer/version_penetration.html)); only people not using flash need to fall back to one of socket.io's (less performant and resource-hogging) backup transports.
Be aware, though, that websockets are not an http transport, so putting them behind http-based proxies (e.g. haproxy in http mode) breaks the connection. Better to serve them on an alternate ip or port so you can proxy in tcp mode (e.g. haproxy in tcp mode).
To answer your questions:
(1) If you don't need a bidirectional transport, longpolling-based solutions are good enough (all they do is keep a connection open). Things do get iffy when you need your connection to be stateful or you need to be able to both send and receive data. In the latter case socket.io helps. However, websockets are made for this scenario, and with the support of flash they're available to most of a website's visitors (via socket.io or standalone; socket.io has the added benefit of backup transports for those people not wanting to install flash).
(2) If all you need is push, evserver is your best bet. It uses the same javascripts on the client side as orbited. Otherwise look at socket.io (this also needs a supporting server; the only python one available is tornado).
(3) It's just one other server implementation. If I read it correctly it's push only; pushing data to a client is done by making an http request from your app to the nginx server (nginx then takes care that it reaches the client). If you're interested in this, look at mongrel2 (http://mongrel2.org/home) - it not only has handlers for longpolling but also for websockets (instead of making http requests to mongrel, this time you use zeromq handlers to get data to your mongrel server). (Do take note of the developer's lack of enthusiasm for websockets and flash-based websockets. Especially taking into account that the websocket protocol tends to evolve, you might, at some point, need to recode mongrel2's websocket support yourself to keep having support for websockets.)
(4) All solutions except evserver replace wsgi with something else, though most servers also have some wsgi support on top of this "something else". No matter what solution you choose, be careful that one cpu-intensive or otherwise io-blocking request doesn't block the server (you either need multiple instances or threads).
(5) Not very significant. All solutions depend on some custom handlers to push (and, if applicable, receive) data to the client. All the solutions I mentioned allow these handlers to be written in python. If you want to use a completely different framework (node.js) then you have to weigh the ease of node.js (it's assumed to be easy, but it's also rather experimental, and I found very few libraries to be actually stable) against the convenience of using your existing code base and the available libraries (e.g. if your app needs a blog there are plenty of django blogs you could plug in, but none for node.js). Also, don't fixate on performance stats: unless you plan to push dumb predefined data (which is what all benchmarks do) to the client, you'll find that the actual processing of data adds much more overhead than even the worst async io implementation. (But you still want to use an async-io-based server if you plan to have many simultaneous clients; threading simply isn't meant to keep thousands of connections alive.)
(6) Websockets offer bidirectional communication; long polling/comet only pushes data and does not accept writes. (Socket.io simulates this bidirectional support by using two http requests: one to longpoll, one to send data. It tracks their interdependence by a (session) id that's part of both requests' query strings.) Flash-based websockets are similar to real websockets (the difference is that their implementation is in the swf, not your browser). Also, the websocket protocol does not follow the http protocol, while longpolling/comet stuff does (technically the websocket client sends an upgrade request to the websocket server, and the upgraded protocol isn't http anymore).
There is support for WebSockets with django-websocket, but unfortunately there are major issues with getting it working; here's a quote from that page:
Disclaimer (what you should know when using django-websocket)
BIG FAT DISCLAIMER - right at the moment its technically NOT possible in any way to use a websocket with WSGI. This is a known issue but cannot be worked around in a clean way due to some design decision that were made while the WSGI stadard was written. At this time things like Websockets etc. didn't exist and were not predictable.
...
But not only WSGI is the limiting factor. Django itself was designed around a simple request to response scenario without Websockets in mind. This also means that providing a standard conform websocket implemention is not possible right now for django. However it works somehow in a not-so pretty way. So be aware that tcp sockets might get tortured while using django-websocket.
So at the moment, WSGI: no go; Django: hardly any go, even with django-websocket; see also a comment in the author's original announcement:
I can't say this looks like a good idea. You're doing long-lived connections in a way that is going to require threading. django-websocket requires threading setup, and won't work if you've got processes (because you'd just have too many processes) but threads won't scale for a lot of connections at the same time, either, so its just a false safety. You need an asynchronous platform for long-lived things, and I do this by doing my app in Django and my comet and websocket in Node.js
Personally if trying to use WebSockets (which I expect to be next year), I would try the combination of Twisted and Cyclone first. They're designed to cope with WebSockets, and scale well. If you write your code properly to remove unnecessary dependencies on Django, you should be able to use much of your code in a Twisted-based system. This is a very distinct advantage over using Node.js or Comet or any system in another language. You could also make a simple push
Finally, you could also just decide it's too hard and use an external service to provide the push support. That then becomes a matter of sending a simple JSON request to their servers instead of worrying about how to make the connection and how concurrency will work and things like that. Of course, you'll need to pay for it (though currently it may be free while in Beta), but you don't need to worry about implementation details; you won't have the full power of WebSockets that way though - just push support.
I can't believe it's been over six years since I asked this question.
Async with Django (and the associated network traffic, e.g. websockets) has been an itch for many of us in the community. I have taken these past few years to, among other things, scratch this itch.
hendrix
hendrix is a WSGI/ASGI container that runs on Twisted. It has been a project mainly driven by 5 enthusiasts, with help and funding from some visionary organizations. It is in production today at dozens, but not hundreds, of companies.
I'll leave it to you to read the documentation to see why it's the best solution to this problem, but a few quick highlights:
it's based on Twisted, requires no knowledge or use of Twisted internals, but leaves them all available
It "just works" in the sense that you don't need any special server or process configuration to do async and socket traffic from within your Django (or Pyramid, or Flask) app
It is very likely to be forward-compatible with ASGI, the Django Channels standard, and is in some meaningful ways the first ASGI container
It ships with simple APIs that maintain the flow of your view logic and are easy to unit test.
Please see this talk that I gave at Django-NYC (at the Buzzfeed offices) for more information about why I think this is the best answer to this question.
Re question #2, I recently was given a tour of the internals of a Django app that uses Comet heavily, and Orbited was the solution they chose.
I'm developing a django-based MMO, and I'm wondering what would be the best way for server-client communication. The solutions I found are:
periodical AJAX calls
keeping a connection alive and sending data through it
Later edit:
This would consist of things like "you have a message", "user x attacked you", "your transport to x has arrived" and so on. They could grow in number (to something like 1/second), but for a typical user they shouldn't reach 1/minute.
Not sure if it's applicable to what you're looking for, but there's a pretty good live example of lightweight server-client communication using node.js for a simple chat service:
http://chat.nodejs.org/
You might want to take a look at crossbar
Crossbar.io is an open-source server software that allows developers to create distributed systems, composed of application components which are loosely coupled, communicate in (soft) real-time and can be implemented in different languages
There's also a third technique involving "hanging" queries:
Client requests an updated page (or whatever)
Server doesn't answer right away
Sometime before the request times out, there's a state update in the server, and the server finally answers the client, which can then update.
If there really is nothing new to tell the client within the update period, then the server responds before the timeout with a "no news" message, and the client starts up another "hanging" request.
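A minimal sketch of such a "hanging" handler, just to make the flow above concrete (written in Java servlet terms because the mechanics are the same regardless of stack; the URL, the shared queue and the timeout are all made up for illustration):

```java
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical hanging-GET handler: blocks until an update arrives or the wait expires.
@WebServlet("/updates")
public class HangingUpdateServlet extends HttpServlet {

    // Stand-in for wherever state changes are published in a real app
    // (a real implementation would track per-client state).
    private static final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    public static void publish(String update) {
        updates.offer(update);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // Hold the request open for up to 25 seconds waiting for news.
            String update = updates.poll(25, TimeUnit.SECONDS);
            resp.setContentType("text/plain");
            if (update != null) {
                resp.getWriter().write(update);          // real news
            } else {
                resp.getWriter().write("no news");       // client re-issues the request
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Note that in this naive form every waiting client occupies a server thread for the whole wait, which is exactly the first disadvantage below; async servlet APIs and event-driven servers exist to avoid that.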
Advantages:
Client doesn't have to do Ajax. You could even make regular HTML pages "interactive" like this.
Probably not quite as much senseless polling traffic.
Disadvantages:
Server needs to keep more active connections open, and service them at least once per timeout period.
Depending on how well the server code supports multi-threading (does PHP provide any help there?), it may be more difficult to code than AJAX response handling.