We would like to run our cms::MessageConsumer and cms::MessageProducer on different threads of the same process.
How do we do this safely?
Would having two cms::Connection objects and two cms::Session objects, one each for consumer and producer, be sufficient to guarantee safety? Is this necessary?
Is there shared state between objects at the static library level that would prevent this type of usage?
You should read the JMS v1.1 specification; it calls out clearly which objects are valid to use from multiple threads and which are not. Namely, the Session, MessageConsumer, and MessageProducer are considered unsafe to share among threads. We generally try to make them as thread safe as we can, but there are certainly ways in which you can get yourself into trouble. It's generally a good idea to use a single session in each thread, and in general it's a good idea to use a session for each MessageConsumer / MessageProducer. The Session contains a single dispatch thread, which means that a session with many consumers must share that dispatch thread to deliver messages to each consumer, which can increase latency depending on the scenario.
I'm answering my own question to supplement Tim Bish's answer, which I am accepting as having provided the essential pieces of information.
From http://activemq.apache.org/cms/cms-api-overview.html
What is CMS?
The CMS API is a C++ corollary to the JMS API in Java which is used to send and receive messages from clients spread out across a network or located on the same machine. In CMS we've made every attempt to maintain as much parity with the JMS API as possible, diverging only when a JMS feature depended strongly on features in the Java programming language itself. Even though there are some differences, most are quite minor and for the most part CMS adheres to the JMS spec, so having a firm grasp on how JMS works should make using CMS that much easier.
What does the JMS spec say about thread safety?
Download spec here:
http://download.oracle.com/otndocs/jcp/7195-jms-1.1-fr-spec-oth-JSpec/
2.8 Multithreading

JMS could have required that all its objects support concurrent use. Since support for concurrent access typically adds some overhead and complexity, the JMS design restricts its requirement for concurrent access to those objects that would naturally be shared by a multithreaded client. The remainder are designed to be accessed by one logical thread of control at a time.

JMS defines some specific rules that restrict the concurrent use of Sessions. Since they require more knowledge of JMS specifics than we have presented at this point, they will be described later. Here we will describe the rationale for imposing them.

Table 2-2 JMS Objects that Support Concurrent Use

Destination: YES
ConnectionFactory: YES
Connection: YES
Session: NO
MessageProducer: NO
MessageConsumer: NO
There are two reasons for restricting concurrent access to Sessions. First, Sessions are the JMS entity that supports transactions. It is very difficult to implement transactions that are multithreaded. Second, Sessions support asynchronous message consumption. It is important that JMS not require that client code used for asynchronous message consumption be capable of handling multiple, concurrent messages. In addition, if a Session has been set up with multiple, asynchronous consumers, it is important that the client is not forced to handle the case where these separate consumers are concurrently executing.

These restrictions make JMS easier to use for typical clients. More sophisticated clients can get the concurrency they desire by using multiple sessions.
As far as I know from the Java side, the Connection is thread safe (and rather expensive to create), but Session and MessageProducer are not thread safe. Therefore it seems you should create a Session for each of your threads.
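To make that concrete, here is a minimal sketch of the session-per-thread pattern in JMS terms (CMS mirrors this API closely, e.g. cms::Connection::createSession). The broker URL and queue name are illustrative assumptions, not from the question:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SessionPerThread {
    public static void main(String[] args) throws Exception {
        // One Connection is safe to share across threads per the spec.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Producer thread: its own Session and MessageProducer.
        Thread producerThread = new Thread(() -> {
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("example.queue"));
                producer.send(session.createTextMessage("hello"));
                session.close();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        // Consumer thread: its own Session and MessageConsumer.
        Thread consumerThread = new Thread(() -> {
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(session.createQueue("example.queue"));
                Message message = consumer.receive(5000); // wait up to 5 s
                if (message instanceof TextMessage) {
                    System.out.println(((TextMessage) message).getText());
                }
                session.close();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        producerThread.start();
        consumerThread.start();
        producerThread.join();
        consumerThread.join();
        connection.close();
    }
}
```

Note that the Connection is created once and shared, while each thread creates and tears down its own Session, which is exactly the division of thread safety the spec's Table 2-2 describes.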
In our application, a user logged in as admin can perform any operation. Suppose one admin is modifying a route while, at the same time, a second admin reads the same route and creates an air waybill for it. This would be a problem. I could not find how my application handles these concurrent requests.
(We are simply using JDBC transactions.)
I am getting different answers from my team:
1. The web/application server handles these transactions and will handle concurrent requests without any issues.
2. Locks will be taken on rows in the database, so there won't be any problem with concurrent requests.
Bottom line: should concurrent requests be handled in code? Should we change some setting in the web/application server when deploying? Or will the database handle concurrent requests by default through its row-locking mechanism?
If anyone knows where to find the solution, please let me know.
As far as I'm aware, most database engines use some kind of locking during queries, but the details differ by engine. I know that InnoDB enforces transaction atomicity (see this Stack Exchange thread), so anything that is wrapped in a transaction won't be interfered with mid-execution. However, there is no guarantee as to which request will reach the database first.
As for the web server/app server: assuming you're using a threaded web server (Apache Tomcat, Jetty, etc.), each request is handled by a separate thread, so I would assume there is no inherent thread safety. In the majority of cases the database will handle your concurrency without complaining. However, I would recommend including some kind of serialisation on the application end, in case you decide to change the DB implementation somewhere down the road; you will also have more control over how requests are handled.
In short: do both.
As far as I know, most databases have some kind of locking during transactions and queries, but you should check your database's reference documentation to determine the type of locking it uses. As for your web server: I know that Tomcat handles requests concurrently and offers some thread safety for its own resources, but it offers no thread safety for your application, so you should handle it on your own. For the problem you mentioned above, when you are accessing your route you should query the database to check whether it still exists. Also, while one admin is modifying the route, you can take some sort of lock on that block, so that when the other admin wants to access the route being modified, he waits for the transaction to complete. If you are using Java on the server side, I recommend looking at Java synchronization methods; for another language, check its locking and thread-safety facilities.
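As an illustration of the row-locking idea with plain JDBC, here is a hedged sketch; the routes table, its columns, and the connection details are hypothetical. A SELECT ... FOR UPDATE inside a transaction makes a concurrent transaction that tries to lock the same row wait until the first one commits or rolls back:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RouteUpdate {
    public static void updateRoute(String url, String user, String pass, long routeId) throws Exception {
        try (Connection conn = DriverManager.getConnection(url, user, pass)) {
            conn.setAutoCommit(false); // start a transaction
            try {
                // Lock the row (and verify the route still exists).
                try (PreparedStatement lock = conn.prepareStatement(
                        "SELECT id, status FROM routes WHERE id = ? FOR UPDATE")) {
                    lock.setLong(1, routeId);
                    try (ResultSet rs = lock.executeQuery()) {
                        if (!rs.next()) {
                            conn.rollback();
                            return; // route no longer exists
                        }
                    }
                }
                // The row is now locked; a concurrent transaction that also
                // issues SELECT ... FOR UPDATE on this row blocks here until
                // we commit or roll back.
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE routes SET status = ? WHERE id = ?")) {
                    update.setString(1, "MODIFIED");
                    update.setLong(2, routeId);
                    update.executeUpdate();
                }
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```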
I need to work with a REST service (built with a JAX-RS implementation) in a heterogeneous environment, so I wondered how the abstractions of programming languages are mapped to the actual RESTful endpoints. I think most aspects are clear, but when it comes to asynchronous communication in REST I know of several possibilities: keeping the connection open, returning a resource that can be polled, chunked messages, or having the client transmit a callback resource.
My approach was to read the JAX-RS 2.0 Specification, but I think there is actually little stated about the REST implementation of asynchronous requests. Then I read the Jersey documentation and came to the conclusion that JAX-RS implementations just keep the connection open for as long as the processing takes. So "asynchronous" in JAX-RS just refers to the blocking of methods on the server/client side and does not use any special behavior of REST. My first question: is my analysis correct?
If this is the case, I have two new questions:
Is this really compliant with the REST paradigm with respect to the stateless constraint?
Considering the long-running processes that maybe work for days, is an open connection eventually automatically closed (e.g. by the OS or by a TCP timer)?
Thanks in advance!
REST architecture has nothing to do with asynchronous programming paradigms, IMO. Asynchronous implementation using @Suspended and the AsyncResponse interface in JAX-RS involves suspending the thread which initiated the request.
To answer your questions
'So with "asynchronous" JAX-RS just refers to the blocking of methods on the server/client side and does not use any special behavior of REST'
-> REST has nothing to do with the async design in JAX-RS, but the way you design the Resource class and set up the async method should follow RESTful principles.
Also, there is no 'blocking' as such; in fact it's exactly the opposite. The I/O thread on the server end is immediately suspended and returned to the container. The actual processing might still take a long time, but the real goal is to 'not block' and occupy threads. A web container has a limited number of threads dedicated to serving incoming requests. Prospective clients will get blocked if ALL the container threads are busy processing other clients. This is avoided by JAX-RS because it suspends the thread, returns it to the web container, and responds on a different thread (an internal server thread). All this increases the overall responsiveness of the application.
'Considering the long-running processes that maybe work for days, is an open connection eventually automatically closed (e.g. by the OS or by a TCP timer)?'
-> Not sure what would happen in that case. But it's not necessary to have your clients waiting 'forever'; you can specify timeouts using the TimeoutHandler (I guess you might have already read about this).
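To make the suspend-and-resume flow concrete, here is a minimal sketch of a JAX-RS 2.0 resource, including a timeout. The resource path, worker pool, and the long-running task are illustrative assumptions, not something from the original question:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("/reports")
public class ReportResource {
    // Illustrative worker pool; in production you'd use a managed executor.
    private static final ExecutorService workers = Executors.newFixedThreadPool(4);

    @GET
    public void generate(@Suspended final AsyncResponse asyncResponse) {
        // Don't keep the client waiting forever: respond with 503 after 30 s.
        asyncResponse.setTimeout(30, TimeUnit.SECONDS);
        asyncResponse.setTimeoutHandler(r ->
                r.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                         .entity("Still processing, try again later").build()));

        // The container thread returns immediately; the work happens elsewhere.
        workers.submit(() -> {
            String result = runLongComputation();  // hypothetical long task
            asyncResponse.resume(result);          // responds on this worker thread
        });
    }

    private String runLongComputation() {
        try { Thread.sleep(5000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "done";
    }
}
```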
Just my two cents!
I'm writing a TCP server for an online turn-based game. I've already written a prototype using PHP sockets, but would like to move to C++. I've been looking at the popular network libraries (ASIO, ACE, POCO, libevent), but I am currently unclear which one would best suit my needs:
1) Connections are persistent (on the order of minutes), and the server must be able to handle 100+ simultaneous connections.
2) Connections must be able to maintain state information (user login info). [My PHP prototype currently requires each client request to contain the login info.]
3) Optionally and preferably multi-threaded, but a single process. Prefer not to have 1 thread per connection, but a fixed number of threads working on all open connections.
I'm leaning towards POCO's TCPServer or Reactor frameworks, but I'm not exactly sure whether they meet my requirements. I think the Reactor is single-threaded, and the TCPServer enforces 1:1 threading per connection. Am I correct?
In either case, I'm not exactly sure how to do the most important task: associating login info with a specific connection, with connections coming and going at random.
Boost.Asio should meet your requirements. The reactor queue can be serviced by multiple threads. Using asynchronous methods will enable your design of a fixed number of threads servicing all connections.
The tutorials and examples are probably the best place to start if you are unfamiliar with the library.
You might also take a look at MUSCLE, a multi-user networking library and server I wrote with this sort of application in mind. It's BSD-licensed, handles hundreds of users, and includes a server-side database mechanism for storing and sharing any information you want the clients to know about each other. The server is single-threaded by default, but I haven't found that to be a problem in practice (and it's possible to extend the server to be multithreaded if that turns out to be necessary).
My customer did not give me details regarding the nature of his application. It might be multithreaded, it might not. His server serves SOAP messages (HTTP requests).
Is there any special trick to determine whether the peer is single- or multi-threaded?
I don't want to ask the customer, and I don't have access to his server/machine. I want to find out myself.
It's irrelevant. Why do you feel it matters to you?
A more useful question would be: can the server accept multiple simultaneous sessions?
The answer is likely to be 'yes, of course' but it's certainly possible to implement a server that's incapable of supporting multiple sessions.
Just because a server supports multiple sessions, it doesn't mean that it's multi-threaded. And, just because it's multi-threaded doesn't mean it will have good performance. When servers need to support many hundreds or thousands of sessions, multi-threading may be a very poor choice for performance.
Are you asking this question because you want to 'overlap' SOAP messages on the same connection - in other words, have three threads send requests, and then all three wait for a response? That won't work, because (like HTTP) request and response messages are paired together on each connection. You would need to open three connections in order to have three overlapped messages.
Unfortunately, no, at least not without accessing the computer directly. Multiple connections can even be managed by a single thread; however, the good news is that this is highly unlikely. Most servers use thread pooling and assign a thread to a connection upon a handshake. Is there a particular reason why you need to know? If you're going to work on this server, you'll presumably learn first-hand how it works.
It doesn't matter whether the server is multithreaded or not. There are good and efficient ways to implement I/O multiplexing without threads (such as select(2)), if that's what worries you.
I want to know the technical reasons why the Lift web framework has high performance and scalability. I know it uses Scala, which has an actor library, but according to the install instructions its default configuration is with Jetty. So does it use the actor library to scale?
Also, is the scalability built in right out of the box? Do you just add additional servers and nodes and it will automatically scale, is that how it works? Can it handle 500,000+ concurrent connections with supporting servers?
I am trying to create a web services framework for the enterprise level that can beat what is out there and is easy to scale, configurable, and maintainable. My definition of scaling is just adding more servers, and you should be able to accommodate the extra load.
Thanks
Lift's approach to scalability is within a single machine. Scaling across machines is a larger, tougher topic. The short answer there is: Scala and Lift don't do anything to either help or hinder horizontal scaling.
As far as actors within a single machine, Lift achieves better scalability because a single instance can handle more concurrent requests than most other servers. To explain, I first have to point out the flaws in the classic thread-per-request handling model. Bear with me, this is going to require some explanation.
A typical framework uses a thread to service a page request. When the client connects, the framework assigns a thread out of a pool. That thread then does three things: it reads the request from a socket; it does some computation (potentially involving I/O to the database); and it sends a response out on the socket. At pretty much every step, the thread will end up blocking for some time. When reading the request, it can block while waiting for the network. When doing the computation, it can block on disk or network I/O. It can also block while waiting for the database. Finally, while sending the response, it can block if the client receives data slowly and TCP windows get filled up. Overall, the thread might spend 30-90% of its time blocked. It spends 100% of its time, however, on that one request.
A JVM can only support so many threads before it really slows down. Thread scheduling, contention for shared-memory entities (like connection pools and monitors), and native OS limits all impose restrictions on how many threads a JVM can create.
Well, if the JVM is limited in its maximum number of threads, and each request in this model ties up a thread, then the number of threads caps the number of concurrent requests a server can handle.
(There are other issues that can impose lower limits, GC thrashing for example. Threads are a fundamental limiting factor, but not the only one!)
Lift decouples thread from requests. In Lift, a request does not tie up a thread. Rather, a thread does an action (like reading the request), then sends a message to an actor. Actors are an important part of the story, because they are scheduled via "lightweight" threads. A pool of threads gets used to process messages within actors. It's important to avoid blocking operations inside of actors, so these threads get returned to the pool rapidly. (Note that this pool isn't visible to the application, it's part of Scala's support for actors.) A request that's currently blocked on database or disk I/O, for example, doesn't keep a request-handling thread occupied. The request handling thread is available, almost immediately, to receive more connections.
This method for decoupling requests from threads allows a Lift server to have many more concurrent requests than a thread-per-request server. (I'd also like to point out that the Grizzly library supports a similar approach without actors.) More concurrent requests means that a single Lift server can support more users than a regular Java EE server.
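As a rough sketch of the decoupling idea in plain Java (this is not Lift's actual implementation, which builds on Scala actors; the pool sizes and handler names here are illustrative): the I/O thread hands the request off and is free again immediately, while the slow work and the response happen on other threads.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DecoupledServerSketch {
    // A small, fixed pool of request-handling (I/O) threads...
    private static final ExecutorService ioPool = Executors.newFixedThreadPool(2);
    // ...and a separate pool where slow work is done.
    private static final ExecutorService workerPool = Executors.newFixedThreadPool(8);

    // Called by an I/O thread once a request has been read off the socket.
    static void onRequest(int requestId) {
        // Hand the request off; the I/O thread returns immediately and can
        // accept the next connection instead of blocking on the slow work.
        workerPool.submit(() -> {
            String body = slowComputation(requestId);            // may block on DB or disk
            ioPool.submit(() -> writeResponse(requestId, body)); // reply on an I/O thread
        });
    }

    static String slowComputation(int id) {
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "response for request " + id;
    }

    static void writeResponse(int id, String body) {
        System.out.println(body); // stand-in for writing to the socket
    }
}
```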
@mtnyguard
"Scala and Lift don't do anything to either help or hinder horizontal scaling"
That's not quite right. Lift is a highly stateful framework. For example, if a user requests a form, he can only post that request to the same machine the form came from, because the form-processing action is saved in the server state.
And this is actually something that hinders scalability, because this behaviour is inconsistent with a shared-nothing architecture.
No doubt Lift is highly performant, but performance and scalability are two different things. So if you want to scale horizontally with Lift, you have to define sticky sessions on the load balancer, which will redirect a user to the same machine for the duration of a session.
Jetty may be the point of entry, but an actor ends up servicing the request. I suggest having a look at the Twitter-esque example, 'skitter', to see how you would be able to create a very scalable service. IIRC, this is one of the things that made the Twitter people take notice.
I really like #dre's reply, as he correctly points out that Lift's statefulness is a potential problem for horizontal scalability.
The problem:
Instead of describing the whole thing again, check out the discussion (not the content) on this post: http://javasmith.blogspot.com/2010/02/automagically-cluster-web-sessions-in.html
The solution would be, as #dre said, sticky-session configuration on the load balancer in front, plus adding more instances. But since request handling in Lift is done with a thread + actor combination, you can expect one instance to handle more requests than normal frameworks do. This gives it an edge over merely having sticky sessions in other frameworks, i.e. an individual instance's capacity to process more requests may help you scale.
There is also the Akka-Lift integration, which would be another advantage here.