Asynchronous EJB scheduling

I'm wondering how asynchronous EJB methods are scheduled onto the underlying platform (an SMP/NUMA platform, for example)?
Can anyone describe the scheduling middleware? (I'm not familiar with EJB.)

EJB as a spec doesn't say exactly how this should be implemented, giving implementations a free hand in how to do it.
That said, the implementations I've seen simply use a thread pool. It functions pretty much like an executor service does in Java SE: a call to an @Asynchronous method results in a task being put on a queue, which is serviced by said thread pool.
SMP/NUMA properties are not directly influenced by EJB, but depend on how the underlying operating system handles threads within a single process.
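To make the thread-pool analogy concrete, here is a minimal Java SE sketch of what a call to an @Asynchronous method effectively reduces to. This is an illustration, not container code; the pool size and queueing policy here are assumptions, since they are vendor-specific.
```java
import java.util.concurrent.*;

public class AsyncSketch {
    // Stand-in for the container-managed pool; the real size and
    // queue policy are up to the EJB implementation.
    private static final ExecutorService pool = Executors.newFixedThreadPool(8);

    // Conceptually what the container does when an @Asynchronous method
    // returning Future<V> is invoked: wrap the call as a task and queue it.
    static Future<String> asyncBusinessMethod(String arg) {
        return pool.submit(() -> {
            // ... the bean method body runs here, on a pool thread ...
            return "processed:" + arg;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = asyncBusinessMethod("order-42");
        System.out.println(result.get()); // caller blocks here, not at the call
        pool.shutdown();
    }
}
```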

What is the executor pattern in a C++ context?

The author of asio, Christopher Kohlhoff, is working on a library and proposal for executors in C++. His work so far includes this repo and docs. Unfortunately, the rationale portion has yet to be written. So far, the docs give a few examples of what the library does, but I feel like I'm missing something: somehow this is more than a family of fancy invoker functions.
Everything I can find on Google is very Java specific and a lot of it is particular to specific frameworks so I'm having trouble figuring out what this "executor pattern" is all about.
What are executors in this context? What do they do? What are the canonical examples of when they would be helpful? What variations exist among executors? What are the alternatives to executors and how do they compare? In particular, there seems to be a lot of overlap with an event loop where the events are initial input events, execution events, and a shutdown event.
When trying to figure out new abstractions I usually find understanding the motivation key. So for executors, what are we trying to abstract and why? What are we trying to make generic? Without executors, what extra work would we have to do?
The most basic benefit of executors is separating the definition of a program's parallelism from how it's used. Java's executor model exists because, by and large, you don't actually know, when you're first writing code, what parallelism model is best for your scenario. You might:
- have little to gain from parallelism and shouldn't use threads at all,
- do best with a long-running dedicated worker thread for each core,
- want a dynamically scaling pool of threads based on current load that cleans up threads after they've been idle a while to reduce memory usage, context switches, etc.,
- or maybe just want to launch a thread for every task on demand, exiting when the task is done.
The key here is it's nigh impossible to know which approach is best when you're first writing code. You may know where parallelism might help you, but in traditional threading, you end up intermingling the parallelism "configuration" (when and whether to create threads) with the use of parallelism (determining which functions to call with what arguments). When you do mix the code like this, it's a royal pain to do performance testing of different options, because each and every thread launch is independent, and must be updated separately.
The main benefit of the executor model is that the parallelism configuration is done in one place (where the executor is created), and the users of that executor don't have to know anything about it. They just submit work to the executor, receive a future, and at some later point, retrieve the result (blocking if necessary) from the future. If you want to experiment with other configurations, you change the one line defining the executor and run your code again. Even if you decide you need to use different parallelism models for different sections of your code, refactoring to add a second executor and change some of the users of the first executor to use the second is easy compared to manually rewriting the threading details of every site; as long as the executor's name is (relatively) unique, finding users and changing them to use a different one is pretty easy. Executors both simplify your code (by avoiding intermingling thread creation/management with the tasks the threads do) and simplify performance testing.
As a side benefit, you also abstract away the complexities of transferring data into and out of a worker thread (the submit method encapsulates the former, the future's result method the latter). std::async gets you some of this benefit, but with no real control over the parallelism involved: just a yes/no/maybe choice of whether to force a thread, force deferred execution in the current thread, or let the compiler/library decide, with no fine-grained control over whether a thread pool is used and, if so, how it behaves. A true executor framework gives you the control std::async fails to provide, with similar ease of use.
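To make that concrete, here is a minimal Java SE sketch of the pattern the answer describes; the task and pool choice are illustrative assumptions:
```java
import java.util.concurrent.*;

public class ExecutorDemo {
    static int expensiveComputation(int x) {
        return x * 2; // placeholder for real work
    }

    public static void main(String[] args) throws Exception {
        // The parallelism "configuration" lives on this one line. To experiment,
        // swap in newCachedThreadPool(), newSingleThreadExecutor(), etc. -
        // no submission site needs to change.
        ExecutorService executor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Users of the executor only submit work and read futures;
        // they know nothing about threads.
        Future<Integer> result = executor.submit(() -> expensiveComputation(21));
        System.out.println(result.get()); // blocks until the task completes

        executor.shutdown();
    }
}
```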

C++ watchdog for 3rd-party lib calls

I have a problem with a long-running boost::regex_match(...) invocation in a threaded process environment. But it could be any other library (API call) with the same problem.
Is there a generic way to set up a watchdog for such calls?
For a non-threaded process, alarm() can be used to detect the timeout.
However, signals don't play nicely with threads. I could avoid direct use of alarm() in the thread and delegate timer management to a dedicated separate thread, letting that one use pthread_kill(...) to signal the correct threads (this is just an idea - I haven't verified that part yet).
However, this too only interrupts and detects the situation; it cannot gracefully stop boost::regex_match(...).
I played around with throwing an exception from within a signal handler, using sigsetjmp() and siglongjmp() in the thread calling boost::regex_match(...).
But this causes memory leaks in boost::regex_match(...), because siglongjmp() bypasses destructors.
How can I gracefully stop a 3rd-party API call, presuming that it's implemented exception-safely?
Or does it have to be supported by some "stoppable" feature actively implemented in the 3rd-party API? (Is there such a feature in the Boost library?)
Maybe a strange idea, but:
Code can be implemented to be "thread-safe" and/or "exception-safe".
Would it be an option to define "longjmp-safe"? This could be done by passing an additional token to a lib to let it associate all resource allocations with that token. After longjmp(), the client software could separately ask the API to release those resources.
Simpler still might be some central init()/release() or register()/unregister() API call, by which the API could clean up after itself.
In a case where you have to:
- monitor for exceeded execution time
- stop the processing
you should simply think in terms of tasks instead of threads.
Using threads sounds "state of the art", but in practice tasks are very often the better implementation - especially for containing memory leaks when execution ends in an "undefined" way, confining unwanted memory growth, controlling stack overruns, etc.
In the case you have mentioned, I would tend to implement it as tasks. IPC works well on all known platforms, though the mechanisms themselves are not portable. If portability is not a requirement, changing to a task-based solution is not a big deal.
A hanging task can be killed by an OS call, and all its locks, memory and other resources (IPC, shared memory, pipes, etc.) are removed automatically. So this fits your problem much better, and it does not depend on your external and possibly unchangeable third-party components.
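The question is C++, but the shape of the task-based watchdog is easy to sketch on the JVM; assume a hypothetical ./regex-worker helper executable that wraps the risky boost::regex_match call (the same structure works with fork/exec and kill in C++):
```java
import java.util.concurrent.TimeUnit;

public class RegexWatchdog {
    public static void main(String[] args) throws Exception {
        // Hypothetical helper binary that performs the risky regex match.
        Process worker = new ProcessBuilder("./regex-worker", "pattern", "input.txt")
                .redirectErrorStream(true)
                .start();

        // Watchdog: give the task two seconds, then kill the whole process.
        if (!worker.waitFor(2, TimeUnit.SECONDS)) {
            worker.destroyForcibly(); // the OS reclaims memory, locks, fds
            System.err.println("regex task timed out and was killed");
        }
    }
}
```
Because the work runs in its own process, there is nothing to unwind and no destructors to bypass; cleanup is the operating system's job.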

Managing agent thread pools in Clojure

Is there a way to control the thread pools which handle the functions which get sent to agents? As I understand things, if I send-off, underneath the hood I'm using an unbounded thread pool. I would like to, say, run some functions on one thread pool and other functions on another. The reason for this: say I have some functions which do IO and which are also less important. I'd throw these on some bounded thread pool and wouldn't worry if there was excessive blocking and they stacked up, since they're, well, less important. The main thing is that I wouldn't want their crappy IO blocking to have an effect on more important functions which are running on another thread pool.
I'm basing the question on something similar I did with thread pools in Akka, and I'm just wondering if I can accomplish the same thing in Clojure.
For Clojure versions up to 1.4:
You cannot replace the built-in agent send and send-off thread pools. They are hard-coded in Agent.java.
The send pool (used for computation) is fixed size = 2 + Runtime.getRuntime().availableProcessors().
The send-off pool (also used for futures) is a cached thread pool and will grow without bound. This allows an arbitrary number of background tasks to wait for I/O. The cached threads will be reused and discarded if they've been idle for one minute.
If you want to manage work on your own thread pool, you'll need to dip into java.util.concurrent or use a Clojure wrapper lib.
For Clojure 1.5 (upcoming):
You can supply your own ExecutorService using (send-via executor a f); the default thread pools are no longer hard-wired. See Agent.java in Clojure 1.5+ for details.
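Since (send-via ...) accepts any java.util.concurrent.ExecutorService, the bounded pool for the less important IO work can be built directly with the JDK. A minimal sketch (the class name and pool size are illustrative):
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AgentPools {
    // Small, bounded pool for the unimportant IO-heavy agents; if these
    // block and stack up, they cannot starve the important work, which
    // keeps using the default send/send-off pools.
    public static final ExecutorService LOW_PRIORITY_IO =
            Executors.newFixedThreadPool(4);
}
```
From Clojure you would then dispatch with (send-via AgentPools/LOW_PRIORITY_IO my-agent my-fn).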
The Clojure library Claypoole is designed for exactly this: it lets you define thread pools and use (and reuse) them for futures, pmaps, and so on.
Amit Rathore (of Runa Inc.) has published a library called Medusa for managing thread pools. It looks like a fairly close match for what you are looking for.
http://s-expressions.com/2010/06/08/medusa-0-1-a-supervised-thread-pool-for-clojure-futures-2/

Concurrency within Java EE environment

Goal
My goal is to better understand how concurrency works within a Java EE environment and how I can best make use of it.
General questions
Let's take a typical servlet container (Tomcat) as an example. It uses one thread to process each request. The thread pool is configured to hold at most 80 threads. Let's also take a simple webapp: it does some processing and DB communication during each request.
At peak time I can see 80 threads running in parallel (plus several other infrastructure threads). Let's also assume I'm running it on an "m1.large" EC2 instance.
I don't think all these threads can really run in parallel on this hardware, so the scheduler has to decide how best to split CPU time between them all. The questions: how big is the scheduler overhead in this case? How can I find the right balance between thread count and processing speed?
Actors comparison
Having 80+ threads on a 4-core CPU doesn't sound healthy to me. Especially if most of them are blocked on some kind of IO (DB, filesystem, socket) - they just consume precious resources. What if we detach requests from threads, keep only a reasonable number of threads (8, for instance), and just send processing tasks to them? Of course, in this case IO should also be non-blocking, so that I receive an event when data I need is available and emit an event when I have results.
As far as I understand, the Actor model is all about this. Actors are not bound to threads (at least in Akka and Scala), so I'd have a reasonably sized thread pool and a bunch of actors with mailboxes that contain processing tasks.
Now the question: how does the actor model compare to the traditional thread-per-request model in terms of performance, scheduler overhead, and resource (RAM, CPU) consumption?
Custom threads
I have some requests (only a few) that take too much time to process. I've optimized the code and all the algorithms and added caches, but it still takes too long. But I can see that the algorithm can be parallelized. It fits naturally into the actor model: I just split my big task into several tasks and then aggregate the results somehow (if needed). But in the thread-per-request model I need to spawn my own threads (or create my own small thread pool). As far as I know, that's not recommended practice within a Java EE environment. And, from my point of view, it doesn't fit naturally into the thread-per-request model. A question arises: how big should my thread pool be? Even if I make it reasonable in terms of hardware, I still have this bunch of threads alongside those managed by the servlet container. Thread management becomes decentralized and goes wild.
So my question - what is the best way to deal with these situations in thread-per-request model?
Having 80+ threads on a 4-core CPU doesn't sound healthy to me. Especially if most of them are blocked on some kind of IO (DB, filesystem, socket) - they just consume precious resources.
Wrong. In exactly this scenario the processor can handle many more threads than it has individual cores, since most of the threads at any point in time are blocked waiting for I/O. Fair enough, context switching takes time, but that overhead is usually irrelevant compared to file/network/DB latency.
The rule of thumb that the number of threads should be equal - or a little more than - the number of processor cores applies only for computation-intensive tasks when the cores are kept busy most of the time.
I have some requests (only a few) that take too much time to process. I've optimized the code and all the algorithms and added caches, but it still takes too long. But I can see that the algorithm can be parallelized. It fits naturally into the actor model: I just split my big task into several tasks and then aggregate the results somehow (if needed). But in the thread-per-request model I need to spawn my own threads (or create my own small thread pool). As far as I know, that's not recommended practice within a Java EE environment.
I've never heard that (but I don't claim to be the ultimate Java EE expert). IMHO there is nothing wrong with executing tasks associated with a single request in parallel using e.g. a ThreadPoolExecutor. Note that these threads are not request-handling threads, so they don't directly interfere with the thread pool used by the EJB container. They do compete for the same resources, of course, so in a careless setup they may slow down or even stall other request-processing threads.
what is the best way to deal with these situations in thread-per-request model?
In the end, you can't escape measuring concurrent performance and fine-tuning the size of your thread pool and other parameters for your own specific environment.
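As a hedged illustration of the ThreadPoolExecutor approach mentioned above - splitting one slow request into subtasks and aggregating - the partitioning scheme and pool size below are assumptions:
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class RequestSplitter {
    // Shared, bounded pool for data-crunching subtasks. Keep it small so it
    // cannot starve the container's request-handling threads.
    private static final ExecutorService crunchers = Executors.newFixedThreadPool(4);

    static long handleSlowRequest(List<long[]> partitions) throws Exception {
        // Fan out: one subtask per data partition.
        List<Callable<Long>> subtasks = new ArrayList<>();
        for (long[] p : partitions) {
            subtasks.add(() -> sum(p));
        }

        long total = 0;
        // invokeAll blocks until every subtask has finished.
        for (Future<Long> f : crunchers.invokeAll(subtasks)) {
            total += f.get(); // aggregate the partial results
        }
        return total;
    }

    static long sum(long[] part) {
        long s = 0;
        for (long v : part) s += v;
        return s;
    }

    public static void main(String[] args) throws Exception {
        List<long[]> parts = List.of(new long[]{1, 2}, new long[]{3, 4});
        System.out.println(handleSlowRequest(parts)); // 10
        crunchers.shutdown();
    }
}
```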
The whole point of Java EE is to put common architectural concerns like security, state, and concurrency into the framework and let you provide the bits of business logic or data mappings along with the wiring to connect them. As such, Java EE intentionally hides the nasty bits of concurrency (locking to read/write mutable state) in the framework.
This approach lets a much broader range of developers successfully write correct applications. A necessary side effect though is that these abstractions create overhead and remove control. That's both good (in making it simple and encoding policies as policies not code) and bad (if you know what you're doing and can make choices impossible in the framework).
It is not inherently bad to have 80 threads on a production box. Most will be blocked or waiting on I/O which is fine. There is a (tunable) pool of threads doing the actual computation and Java EE will give you external hooks to tune those knobs.
Actors are a different model. They also let you write islands of code (the actor body) that (can) avoid locking to modify state. You can write your actors to be stateless (capturing the state in the recursive function call parameters) or hide your state completely in an actor instance so the state is all confined (for react style actors you probably still need to explicitly lock around data access to ensure visibility on the next thread that runs your actor).
I can't say that one or the other is better. I think there is adequate proof that both models can be used to write safe, high-throughput systems. To make either perform well, you need to think hard about your problem and build apps that isolate parts of state and the computations on each kind of state. For code where you understand your data well and have a high potential for parallelism I think models outside Java EE make a lot of sense.
Generally, the rule of thumb in sizing compute-bound thread pools is approximately the number of cores + 2. Many frameworks size to that automatically. You can use Runtime.getRuntime().availableProcessors() to get N. If your problem decomposes in a divide-and-conquer style algorithm and the number of data items is large, I would strongly suggest checking out fork/join, which can be used now as a separate library and will be part of Java 7.
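A minimal fork/join sketch of that divide-and-conquer style (the threshold and the summing task are illustrative):
```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum over an array: split until the slice is small
// enough to compute directly; idle workers steal the forked halves.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                                       // left half, asynchronously
        long right = new SumTask(data, mid, to).compute(); // right half, here
        return right + left.join();                        // combine when both are done
    }
}
// Usage: new ForkJoinPool().invoke(new SumTask(array, 0, array.length))
```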
As far as how to manage this, you're not supposed to spawn threads as such inside Java EE (they want to control that) but you might investigate sending a request to your data-crunching thread pool via a message queue and handling that request via a return message. That can fit in the Java EE model (a bit clumsily of course).
I have a writeup of actors, fork/join, and some other concurrency models here that you might find interesting: http://tech.puredanger.com/2011/01/14/comparing-concurrent-frameworks/

How to design multithreaded application

I have a multithreaded application. Each module is executed in a separate thread.
Modules are:
- network module - used to receive/send data from network
- parser module - encode/decode network data to internal presentation
- 2 application modules - perform some application logic on the above data, one after the other
- counter module - used to gather statistics from other modules
- timer module - used to schedule timers
- and much more ...
All threads use message queues for inter-thread communication (a std::deque synchronized by a condition variable and a mutex).
Some modules are used by other ones (e.g. all modules use the timer and counter), and this happens for each message received from the network, which must be handled at very high rates.
This is a pretty complex application and the design looks "reasonable". On the other hand, I'm not sure that such a design - thread per module - is the "best" one. In particular, I'm afraid that such a design encourages a lot of context switches.
What do you think?
Are there any good guidelines, or open source projects to learn from, on how to do a "correct" design of a threaded application?
Thread-per-function designs are just naive: they assume that by separating tasks onto threads by module, some kind of scalability will be achieved.
This kind of design is inefficient, as very few task breakdowns yield exactly as many tasks as there are CPUs.
Far more rational designs are to break tasks down into 'jobs' - and then use thread pooling mechanisms to dispatch those jobs.
Advantages over the thread-per-module approach:
Thread pools take advantage of all cores. With thread-per-module, if you have fewer modules than cores, you have cores sitting idle.
Thread pools minimize contention and resource use by maintaining parity between active threads and cores. With thread-per-module, if you have more modules than cores, you incur needless extra context switches, and (on some platforms) each thread eats into other limited per-process resources (like virtual memory).
Thread pools let a "module" do multiple jobs at a time. Thread-per-module means that the busiest module still only gets one core. A sketch of the job-oriented shape follows.
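The question is C++, but the job-oriented shape is easiest to show with the JDK's executors; a hedged sketch in which each "module" contributes jobs to one shared pool (all names are illustrative):
```java
import java.util.concurrent.*;

public class JobPool {
    // One pool sized to the machine, shared by all "modules".
    static final ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());

    static String parse(byte[] raw) { return new String(raw); } // "parser" work
    static void applyLogic(String msg) {                        // "app logic" work
        System.out.println("handled " + msg);
    }

    public static void main(String[] args) throws Exception {
        byte[] packet = "hello".getBytes();
        // Each stage is a job, not a thread: the busiest stage can occupy
        // several cores at once, and an idle stage occupies none.
        Future<String> parsed = pool.submit(() -> parse(packet));
        Future<?> handled = pool.submit(() -> {
            applyLogic(parsed.get());
            return null;
        });
        handled.get(); // let the pipeline drain before exiting
        pool.shutdown();
    }
}
```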
I wouldn't call myself an expert in multi-threaded design, but I've at least worked with threads enough to have run into various issues trying to design them to work together (communication, locking resources, waiting for threads to end, etc.).
At this point, my general rule of thumb is that I must justify the existence of each new thread. For example, if the network layer I'm using provides both a synchronous and an asynchronous API, can I really justify making the network code use synchronous calls in a new thread instead of just using the asynchronous calls in the main thread? In your case, how many modules actually need a thread of their own for a specific reason? Are there any that could instead just be called in turn from the main thread?
If some threads have no good reason for existing, then you might be able to save yourself some trouble and complexity by just putting that module in the main thread.
Now of course, there are good justifiable reasons for putting things in threads. Such as making synchronous calls that may block for a long time, keeping a GUI thread responsive while performing a long task, or being able to take advantage of parallel processing of a large task on a multi-core system.
I don't know of any particular "correct" way to do it. A lot of it really comes down to the details of what your application is actually supposed to do.
A good guideline is to put operations that might block (such as I/O) in their own threads. Your network module is a definite candidate here. Have your network thread use select() (I assume UNIX here) to block on input.
Asynchronous events are good in separate threads as well. Your timer module looks like a good candidate here.
You might want to put your other modules in one thread to decrease complexity of your application. BUT, you might want to split them up if you have a multi-processor system.
Have a good strategy for locking resources and mutex handling to prevent deadlocks. A dependency graph (using a whiteboard!) might help here to get your design correct.
Good luck! Sounds like a complex system which will cause many hours of fun development!
For what platform?
For instance, for Win32 applications the best model for back-end servers (like yours seems to be) is a thread pool with an IO completion port. This is not just hearsay and opinion; there are strong facts behind this claim. Rick Vicik of the Windows Performance team has posted a series of articles describing in greater detail why high-end servers need to follow this model; see High Performance Windows Programs.
There are other factors that come into play, like the type of protocol your network module has to handle. Request-response protocols are often handled with the one-thread-per-request metaphor, and they do well enough, but high-throughput, high-scale protocols don't fare well in that model, specifically because of boxcarring requirements.
Ultimately, whether your design is sound is hard to tell just from this brief description. Personally I tend to favor an IO-completion-driven threading model over a logical-module-driven one, but that's just me.
Just to add to the other answers, let's reason about every single thread in your design:
network module
Accepted.
parser module + 2 application modules
Are you sure that these 3 threads can't be merged into one main data-processing thread? If so, you could then benefit from a thread pool as others suggested, having this processing performed by N threads.
timer module
This one is probably reasonable on most platforms, as you will need a message-processing loop to dispatch timer events. Also, if you ever need a GUI, that could be the place for it.
counter module
This is the one that bothers me most. I can't find a reason for having a separate thread for this. Depending on how often you increment counters, it will become a nice bottleneck for the application.
I'll suggest keeping separate counters in each thread and polling them (via the message queue) when you need them, as in the sketch below.
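On the JVM the same idea is packaged as LongAdder, which keeps per-thread cells and only aggregates when polled; a hedged Java sketch (the question is C++, where you would hand-roll per-thread counters aggregated on demand):
```java
import java.util.concurrent.atomic.LongAdder;

public class Stats {
    // Striped counter: each thread bumps its own cell, so the hot path
    // has no shared bottleneck; the cost of sum() is paid only on poll.
    private static final LongAdder messagesHandled = new LongAdder();

    static void onMessage() {
        messagesHandled.increment(); // cheap, contention-free
    }

    static long poll() {
        return messagesHandled.sum(); // aggregate across all threads' cells
    }

    public static void main(String[] args) {
        onMessage();
        onMessage();
        System.out.println(poll()); // 2
    }
}
```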
and much more ...
Hope not!