How does Akka decide which Actor gets the Thread?

Say we have three Akka Actors, A, B and C running on a dispatcher with only one Thread and the following happens:
A receives a message and starts processing it
In the meantime, a message is sent to B and, simultaneously, a message is sent to C. Since no Thread is available, both B and C queue these messages in their mailboxes
A now finishes processing its message and has no more messages in its mailbox, so it releases the Thread back into the pool
B and C now both need this Thread. Are there any guarantees as to which will be put on the Thread first?
How does Akka make this decision? Does it round robin on all Actors in the ActorSystem?
Is this decision configurable?
Can I say prioritise Actor C to get Threads before Actor B in these situations?

The whole reason to use Akka is so you don't have to deal with this sort of thing. You do not want (or need) to prioritize actors like this. The internal dispatcher logic is complex and very well optimized to process tasks as fast as possible. Prioritization should be done through other means, such as routers or priority mailboxes.
Now to answer your question: the default dispatcher is backed by a blocking queue, so the actor that first received a message in its mailbox will be selected first.
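For illustration only, here is a minimal sketch of what such a priority mailbox could look like with the classic Akka API; the class name, the message shape, and the configuration key mentioned below are invented for the example:

import akka.actor.ActorSystem
import akka.dispatch.{PriorityGenerator, UnboundedPriorityMailbox}
import com.typesafe.config.Config

// Hypothetical mailbox: it reorders messages inside ONE actor's mailbox,
// it does not prioritise one actor over another for threads.
class UrgencyMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedPriorityMailbox(
    PriorityGenerator {
      case "urgent" => 0 // lower value is dequeued first
      case _        => 1
    })

You would then register it in configuration under a name of your choosing (something like urgency-mailbox { mailbox-type = "com.example.UrgencyMailbox" }) and attach it with Props(...).withMailbox("urgency-mailbox"). If the goal is spreading work rather than reordering it, routers are the tool instead.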

Related

Is it possible to prioritize (give a priority) to specific Akka's Actor?

I've done my research on the Akka framework,
and I would like to know:
Is it possible to give a priority to a specific actor?
I mean, actors do their work when they get a message from the queue.
Is there an option to let an actor work even when it's not yet its turn?
Effectively, yes.
One of the parts of your Actor configuration is which Dispatcher those actors will use. A dispatcher is what connects the actor to the actual threads that will execute the work. (Dispatchers default to ForkJoinPools, but can also be dedicated thread pools or even threads dedicated to a specific actor.)
So the typical way you give an Actor "priority" is to give it a dedicated dispatcher, and thereby dedicated threads. For example, Akka itself does this for its internal messages: they run on a dedicated dispatcher so that even if you deploy a bunch of poorly written actors that block the threads, Akka itself can still function.
I put "priority" in quotes, because you aren't guaranteeing a specific order of processing. (There are other ways to do that, but not across Actors.) But you are solving the case where you want specific actors to always have a greater access to resources and/or specific actors to get executed promptly.
(In theory, you could take this even further and create a ThreadPoolExecutor with higher-priority threads, and then create a Dispatcher based on that ThreadPoolExecutor. That would truly give OS-level priority to an Actor, but it would likely only be relevant in very unusual circumstances.)
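As a rough sketch of that approach (the dispatcher name, pool size, and actor below are invented, and the classic Akka API is assumed), a dedicated dispatcher is just a configuration block plus an opt-in on the Props:

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Hypothetical actor that we want to keep responsive regardless of load elsewhere.
class ImportantActor extends Actor {
  def receive: Receive = {
    case msg => println(s"handled on ${Thread.currentThread().getName}: $msg")
  }
}

object DedicatedDispatcherExample extends App {
  // Invented dispatcher definition: a small, dedicated thread pool just for this actor.
  val config = ConfigFactory.parseString(
    """
    important-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        fixed-pool-size = 2
      }
      throughput = 1
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("demo", config)
  // Whatever happens on the default dispatcher, these two threads stay reserved for this actor.
  val important = system.actorOf(Props[ImportantActor]().withDispatcher("important-dispatcher"))
  important ! "hello"
}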
EDIT TO RESPOND TO "do mailboxes and dispatchers are the same" [sic]?
No. Each actor has a mailbox. So sometimes we talk about the behavior of mailboxes when discussing the behavior of actors, as the behavior of the mailbox governs the ordering of the actor's message processing.
But dispatchers are a distinct concept. Actors have a dispatcher, but it is many to one. (i.e. each Actor has one mailbox, but there may be many actors associated with a single dispatcher.)
For example, a real world situation might be:
System actors are processed by the internal dispatcher. To quote the docs "To protect the internal Actors that are spawned by the various Akka modules, a separate internal dispatcher is used by default." i.e. no matter how badly screwed up your own code might be, you can't screw up the heartbeat processing and other system messages because they are running on their own dispatcher, and thus their own threads.
Most actors (millions of them perhaps) are processed by the default dispatcher. Huge numbers of actors, as long as they are well behaved, can be handled with a tiny number of threads. So they might all be configured to use the default dispatcher.
Badly behaved actors (such as those that block) might be configured to be processed by a dedicated "blocking" dispatcher. By isolating blocking actors on a separate dispatcher, they don't impact the response time of the default dispatcher.
Although I don't see this often, you might also have a dispatcher for extremely response time sensitive actors that gives them a dedicated thread pool. Or even a "pinned" dispatcher that gives an actor a dedicated thread.
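If I recall the configuration correctly, such a pinned dispatcher is just another dispatcher entry; the name below is made up for illustration:

import com.typesafe.config.{Config, ConfigFactory}

object PinnedDispatcherConfig {
  // Each actor that opts in via Props(...).withDispatcher("my-pinned-dispatcher")
  // gets its own single, dedicated thread.
  val config: Config = ConfigFactory.parseString(
    """
    my-pinned-dispatcher {
      type = PinnedDispatcher
      executor = "thread-pool-executor"
    }
    """)
}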
As I mentioned, this isn't really "priority", this is "dedicated resources". One of the critical aspects of actors is that they are location independent. So if Actor A is on Node A and Actor B is on Node B, I can't guarantee that Actor A will ALWAYS act first, because doing so would involve an ASTRONOMICAL amount of overhead between nodes. All I can reasonably do is give Actor A dedicated resources so that I know that Actor A should always be able to act quickly.
Note that this is what the internal dispatcher does as well. We don't guarantee that heartbeat messages are always processed first, but we do make sure that there are always threads available to process system messages, even if some bad user code has blocked the default dispatcher.

Actors, ForkJoinPool, and ordering of messages

I need help understanding how an Actor system can use ForkJoinPool and maintain ordering guarantees.
I have been playing with Actr (https://github.com/zakgof/actr), a small and simple actor system. I think my question applies to Akka as well. I have a simple bit of code that sends one Actor the numbers 1 to 10. The Actor just prints the messages, and they are not in order: I get 1, 2, 4, 3, 5, 6, 8, 7, 9, 10.
I think this has to do with the ForkJoinPool. Actr wraps a message into a Runnable and sends it to the ForkJoin Executor. When the task executes, it puts the message onto the destination Actor's queue and processes it. My understanding of ForkJoinPool is that tasks are distributed to multiple threads. I've added logging, and the messages 1, 2, 3, ... are being distributed to different threads and put onto the Actor's queue out of order.
Am I missing something? Actr's Scheduler is similar to Akka's Dispatcher and it can be found here: https://github.com/zakgof/actr/blob/master/src/main/java/com/zakgof/actr/impl/ExecutorBasedScheduler.java
The ExecutorBasedScheduler is constructed with a ForkJoinPool.commonPool like so:
public static IActorScheduler newForkJoinPoolScheduler(int throughput) {
    return new ExecutorBasedScheduler(ForkJoinPool.commonPool(), throughput);
}
How can an Actor use ForkJoinPool and keep messages in order?
I can't speak to Actr at all, but in Akka the individual messages are not created as ForkJoinPool tasks. (One task per message would be a bad approach for many reasons, not just ordering: messages can typically be processed very quickly, so one task per message would carry an awful lot of overhead. You want to have some batching, at least under load, so that you get better thread locality and less overhead.)
Essentially, in Akka, the actor mailboxes are queues within an object. When a message is received by the mailbox, it checks whether it has already scheduled a task; if not, it adds a new task to the ForkJoinPool. So the ForkJoinPool task isn't "process this message", but rather "process the Runnable associated with this specific Actor's mailbox". Some period of time then passes before the task gets scheduled and the Runnable runs. By the time the Runnable runs, the mailbox may have received many more messages, but they will just have been added to the queue, and the Runnable will process as many of them as it is configured to do, in the order in which they were received.
This is why, in Akka, you can guarantee the order of messages within a mailbox, but cannot guarantee the order of messages sent to different Actors. If I send message A to Actor Alpha, then message B to Actor Beta, then message C to Actor Alpha, I can guarantee that A will be before C. But B might happen before, after, or at the same time as A and C. (Because A and C will be handled by the same task, but B will be a different task.)
Messaging Ordering Docs : More details on what is guaranteed and what isn't regarding ordering.
Dispatcher Docs : Dispatchers are the connection between Actors and the actual execution. ForkJoinPool is only one implementation (although a very common one).
EDIT: Just thought I'd add some links to the Akka source to illustrate. Note that these are all internal APIs. tell is how you use it, this is all behind the scenes. (I'm using permalinks so that my links don't bitrot, but be aware that Akka may have changed in the version you are using.)
The key bits are in akka.dispatch.Dispatcher.scala
Your tell will go through some hoops to get to the right mailbox. But eventually:
dispatch method gets called to enqueue it. This is very simple, just enqueue and call the registerForExecution method
registerForExecution This method actually checks to see if scheduling is needed first. If it needs scheduling it uses the executorService to schedule it. Note that the executorService is abstract, but execute is called on that service providing the mailbox as an argument.
execute
If we assume the implementation is ForkJoinPool, this is the executorService execute method we end up in. Essentially we just create a ForkJoinTask with the supplied argument (the mailbox) as the runnable.
run The Mailbox is conveniently a Runnable, so the ForkJoinPool will eventually call this method once scheduled. You can see that it processes special system messages, then calls processMailbox, then (in a finally) calls registerForExecution again. Note that registerForExecution checks whether it needs scheduling first, so this isn't an infinite loop; it's just checking if there is remaining work to do. While we are in the Mailbox class you can also look at some of the methods used by the Dispatcher to see if scheduling is needed, to actually add messages to the queue, etc.
processMailbox is essentially just a loop over calling actor.invoke, except that it has to do lots of checking to see if it has system messages, if it's out of work, if it has passed a threshold, if it has been interrupted, etc.
invoke is where the code you write (the receiveMessage) actually gets called.
If you actually click through all of those links you'll see that I'm simplifying a lot. There's lots of error handling and code to make sure everything is thread safe, super efficient, and bulletproof. But that's the gist of the code flow.
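To make that gist concrete, here is a toy, deliberately non-Akka sketch of the "schedule the mailbox, not the message" idea; all names and details are mine, and Akka's real code is far more sophisticated:

import java.util.concurrent.{ConcurrentLinkedQueue, ExecutorService, Executors}
import java.util.concurrent.atomic.AtomicBoolean

final class ToyMailbox(handler: Any => Unit, pool: ExecutorService, throughput: Int) extends Runnable {
  private val queue = new ConcurrentLinkedQueue[Any]()
  private val scheduled = new AtomicBoolean(false)

  def tell(msg: Any): Unit = {
    queue.add(msg)
    registerForExecution() // schedule a drain task only if one is not already pending
  }

  private def registerForExecution(): Unit =
    if (!queue.isEmpty && scheduled.compareAndSet(false, true)) pool.execute(this)

  // The pool task is "drain this mailbox", never "process this one message",
  // so messages to the same mailbox are handled in FIFO order even on a multi-threaded pool.
  def run(): Unit = {
    try {
      var processed = 0
      var msg = queue.poll()
      while (msg != null) {
        handler(msg)
        processed += 1
        msg = if (processed < throughput) queue.poll() else null
      }
    } finally {
      scheduled.set(false)
      registerForExecution() // pick up anything that arrived while we were draining
    }
  }
}

object ToyMailboxDemo extends App {
  val pool = Executors.newWorkStealingPool() // backed by a ForkJoinPool
  val mailbox = new ToyMailbox(msg => println(msg), pool, throughput = 5)
  (1 to 10).foreach(i => mailbox.tell(i)) // prints 1 to 10 in order
  Thread.sleep(500) // crude wait, since the work-stealing pool uses daemon threads
}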

Does Akka ask pattern get rid of timed out message on target actor's mailbox?

Assume that I have 2 actors, A and B. A asks B for a response with a 10-second timeout. B is still busy processing other messages and cannot respond to A within 10 seconds, so the Future in actor A gets a timeout exception.
The questions are:
1) After actor A gets the exception, is the message that A sent to B still inside B's mailbox, waiting for B to process it?
2) If yes, how can I prevent/detect B being overwhelmed with messages?
The answer to this is that yes, the message is still in B's mailbox. This is quite a common misconception about Futures in contexts like the ask pattern, where a request is represented by a future that times out after a limited amount of time: most people assume that this means the process being awaited gets cancelled, but there is no way a generic abstraction like a Future could do that (also remember that Future ≠ Thread).
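As a small sketch of that behaviour (the actor and messages below are invented, classic Akka API assumed): A's ask fails with an AskTimeoutException, yet nothing removes the work on B's side; B still processes it, and B's late reply ends up in dead letters.

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.util.{Failure, Success}

// B deliberately takes longer than A's ask timeout (don't block like this in real code).
class SlowB extends Actor {
  def receive: Receive = {
    case "work" =>
      Thread.sleep(2000)
      sender() ! "done" // A's temporary ask actor has already timed out; this reply goes to dead letters
  }
}

object AskTimeoutDemo extends App {
  val system = ActorSystem("demo")
  import system.dispatcher // ExecutionContext for onComplete

  val b = system.actorOf(Props[SlowB]())
  implicit val timeout: Timeout = Timeout(500.millis)

  (b ? "work").onComplete {
    case Failure(ex) => println(s"A timed out: $ex") // the timeout cancels nothing on B's side
    case Success(r)  => println(s"A got: $r")
  }
}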
As for the second question, there is no easy answer and you need to think about a strategy that fits your environment. If all your actors reside in the same VM, you could embed timestamps in your messages and validate them on the target actor; this would still be racy, as actor B could take just enough time processing the message to trigger the timeout on the A side, generating an inconsistency (i.e. A thinks that the operation failed, B thinks that it succeeded).
Otherwise, in a distributed environment, an idea could be to send a compensating action to actor B when actor A detects a timeout. You may want to have a look at the Saga Pattern (you can find a nice talk here) if you are building such a distributed system based on actors and are having trouble with this sort of situation. In any case, just remember that distributed systems are hard and there is no silver bullet or any way around studying hard to get them right.

When is it safe to block in an Akka 2 actor?

I know that it is not recommended to block in the receive method of an actor, but I believe it can be done (as long as it is not done in too many actors at once).
This post suggests blocking in preStart as one way to solve a problem, so presumably blocking in preStart is safe.
However, I tried to block in preRestart (not preStart) and everything seemed to just hang - no more messages were logged as received.
Also, in cases where it is not safe to block, what is a safe alternative?
It's relatively safe to block in receive when:
the number of blocked actors in total is much smaller than the number of total worker threads. By default there are ten worker threads, so 1-2 blocked actors are fine
the blocking actor has its own dedicated dispatcher (thread pool), so other actors are not affected
When it's not safe to block, a good alternative is to... not block ;-). If you are working with a legacy API that is inherently blocking, you can either have a separate thread pool maintained inside some actor (feels wrong) or use approach 2 above: dedicate a few threads to the subset of actors that need to block.
Never ever block an actor.
If your actor is part of an actor hierarchy (and it should be), the actor system is not able to stop it while it is blocked.
The actor's life-cycle (supervision, watching, etc.) is done by messaging.
Stopping a parent actor of a blocking child will not work.
Maybe there are ways to couple the blocking condition with the actor's lifecycle.
But this would lead to an overload of complications and bad style.
So, the best way is to do the blocking part outside of that actor.
E.g. you could run the blocking code via an executor service in a separate thread.
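A sketch of that last suggestion, with invented names and the classic Akka API: the blocking call runs on its own small executor service and the result comes back to the actor as an ordinary message, so the actor itself never blocks.

import java.util.concurrent.Executors

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.pipe
import scala.concurrent.{ExecutionContext, Future}

final case class Result(value: String)

// The actor never blocks: the blocking call runs on its own small executor,
// and the outcome comes back to the actor as an ordinary message via pipeTo.
class NonBlockingActor extends Actor {
  private val blockingEc =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4)) // dedicated threads for legacy blocking calls
  import context.dispatcher // used by pipeTo to deliver the result

  def receive: Receive = {
    case "fetch" =>
      Future {
        Thread.sleep(1000) // stand-in for a blocking legacy API call
        Result("payload")
      }(blockingEc).pipeTo(self)
    case Result(v) =>
      println(s"got result without ever blocking the actor: $v")
  }

  override def postStop(): Unit = blockingEc.shutdown()
}

object BlockingOutsideActorDemo extends App {
  val system = ActorSystem("demo")
  system.actorOf(Props[NonBlockingActor]()) ! "fetch"
}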

responsively checking two queues without pegging CPU

I have a thread pool system which uses message passing to organize events, and I am also using the Windows API, which does a bit of message passing of its own. So essentially I need to use the functions which check for the presence of messages without blocking. If I block while checking either queue (GetMessage, I think, will block), I may miss incoming messages on the other queue.
The first solution I know of is to Sleep for a couple of milliseconds somewhere during my loop of peeking at both queues.
Another way I can think of is to have an additional thread, so that I have one thread for each loop I am listening to. I would make it responsible for nothing other than running the Windows message loop, then use it to process and forward any events to my own message queue for handling. But this won't work if Windows specifically sends the messages I'm interested in to the original thread.
Are there other good solutions?
Your requirement is a bit unclear, but I can agree that Windows message queues are awkward in that only one thread can wait on them. Windows binds windows to threads, and only the thread that creates a window can interact with it.
If you have user-defined messages that contain work to be processed by your thread pool, I suggest that you do exactly what you suggest in your question: use one thread to process all the Windows messages (a GetMessage() loop), requeue any work that turns up onto your thread pool's input queue, and handle 'normal' Windows messages with the usual Translate/Dispatch mechanism.
If you need more help, could you describe more clearly the flow of Windows messages and/or work objects through your system? It is not obvious where the work for the thread pool comes from and how it is transported (if forced to use a Windows message queue, I usually PostMessage a reference in wParam/lParam, but what about your system?).
Normally, a thread pool would not be involved in the Windows message loop, and blocking indefinitely when there is no work is not only allowable for a worker thread, but even desirable.
The most elegant way of implementing a thread pool that can receive messages via some kind of queue, which automatically keeps all CPU cores busy, and which as a bonus is very efficient, is using a completion port.
CreateIoCompletionPort with INVALID_HANDLE_VALUE as the file handle (and no existing port) will create a new completion port and return its handle. Passing zero as NumberOfConcurrentThreads tells the operating system to keep as many threads running as there are cores available.
Create any number of worker threads (a few more than you have cores) and have each of them call GetQueuedCompletionStatus with an INFINITE timeout on the handle returned by the first call. A thread becomes associated with the completion port by that call, and it will block indefinitely until work arrives.
Make a struct which has an OVERLAPPED as the first member, plus any data that you want to hand as a task (some pointers to data, or anything).
For every task, set up one of your message structs and pass its address to PostQueuedCompletionStatus on the completion port handle. At application exit, post a null pointer as a shutdown sentinel. You can use the dwNumberOfBytesTransferred field (and the completion key) to pass some additional info.
Now Windows will wake one thread for every message you posted, in last-in-first-out order, up to the number of cores available. If one of the workers blocks on IO, Windows will wake another one for another task (keeping the CPU busy as long as there is work to do).
After finishing a task, go back to GetQueuedCompletionStatus.
A way to gracefully terminate all workers is to pass "zero bytes transferred" and have the worker re-post the event, and exit if it encounters that.
I am not an expert on Windows queues, but I am nearly certain there has to be an asynchronous, event-driven mechanism for message passing.