In Akka, what happens if I watch() a dead ActorRef?

If I call context.watch() on an ActorRef that is already dead, am I guaranteed to still receive a termination message?
Also, after having received a termination message regarding a specific actor, do I still need to call unwatch()?
Also, are watch() calls reference counted? If I call watch() twice, followed by unwatch() once, am I guaranteed to still get termination messages?

I think the documentation is pretty clear:
"One important property is that the message will be delivered
irrespective of the order in which the monitoring request and targetโ€™s
termination occur, i.e. you still get the message even if at the time
of registration the target is already dead."
http://doc.akka.io/docs/akka/2.0.1/general/supervision.html
And no, you do not need to call unwatch() after receiving the termination message, since the actor cannot die twice. Nor are watch() calls reference counted; registration is binary, so a single unwatch() deregisters you no matter how many times you called watch().
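Both claims can be sketched with a toy Python model of death watch (purely illustrative; this is not Akka's implementation, and ToyWatcher is an invented name): registration is set membership, so watching twice is the same as watching once, and watching an already-dead target still delivers a notification.

```python
# Toy model of death-watch semantics (illustrative only, not Akka code):
# registration is a set membership, so watch() twice equals watch() once,
# and watching an already-dead target still delivers a notification.

class ToyWatcher:
    def __init__(self):
        self.watched = set()     # binary: a ref is watched or it isn't
        self.dead = set()
        self.inbox = []

    def watch(self, ref):
        self.watched.add(ref)    # idempotent, not reference-counted
        if ref in self.dead:     # already dead: deliver Terminated anyway
            self.inbox.append(("Terminated", ref))

    def unwatch(self, ref):
        self.watched.discard(ref)

w = ToyWatcher()
w.dead.add("actorA")             # actorA died before anyone watched it
w.watch("actorA")
print(w.inbox)                   # [('Terminated', 'actorA')]

w.watch("actorB")
w.watch("actorB")                # second watch() changes nothing
w.unwatch("actorB")              # a single unwatch() fully deregisters
print("actorB" in w.watched)     # False
```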

Related

Can an MPI_WAITALL call be skipped if all requests are already complete?

Let's say I have an array of non-blocking MPI requests initiated in a series of calls to MPI_ISEND (or MPI_IRECV). I store the request info in the array xfer_rqst. It is my understanding that when a request completes, its corresponding value in xfer_rqst will be set to MPI_REQUEST_NULL. Further, it's my understanding that if a call is made to MPI_WAITALL, it will have work to do only if some requests in xfer_rqst still have some value other than MPI_REQUEST_NULL. If I have it right, then, if all requests complete before we get to the MPI_WAITALL call, then MPI_WAITALL will be a no-op for all intents and purposes. If I am right about all this (hoo boy), then the following Fortran code should work and maybe occasionally save a useless function call:
IF (ANY(xfer_rqst /= MPI_REQUEST_NULL)) &
CALL MPI_WAITALL(num_xfers, xfer_rqst, xfer_stat, ierr)
I have actually run this several times without an issue. But still I wonder, is this code proper and safe?
It is indeed often the case that you can decide from the structure of your code that certain requests are satisfied. So you think you can skip the wait call.
And indeed you often can in the sense that your code will work. The only thing you're missing is that the wait call deallocates your request object. In other words, by skipping the wait call you have created a memory leak. If your wait call is in a region that gets iterated many times, this is a problem. (As noted in the comments, the standard actually states that the wait call is needed to guarantee completion.)
In your particular case, I think you're wrong about the null request: it's the wait call that sets the request to null. Try it: see if any requests are null before the wait call.

When can an initial message be reordered with the initial creation of an actor?

In Akka, message ordering is only guaranteed between a "given pair of actors" (source).
Same page also states "this rule is not transitive" and gives a theoretical example of actor A sending message M1 to actor C, then sending message M2 to actor B, who forwards it to actor C. Actor C can therefore receive M1 and M2 in either order.
The page also highlights how creating an actor is the same as sending a message to it and thus, how any other foreign actor could end up sending messages to an actor that does not yet exist:
Actor creation is treated as a message sent from the parent to the child, with the same semantics as discussed above. Sending a message to an actor in a way which could be reordered with this initial creation message means that the message might not arrive because the actor does not exist yet. [...] An example of well-defined ordering is a parent which creates an actor and immediately sends a message to it.
So far so good. The problem is that from what I can gather (I am new to Akka!), the most commonly used pattern in Akka code is to have an actor spawn children and then their shared parent initiates the communication between the children, and so, this is not "well-defined ordering". Is a majority of the world's applications built on Akka fundamentally broken, an accident waiting to happen?
Here are a few examples from Akka themselves.
On this page they have a HelloWorldMain which spawns two children HelloWorld and HelloWorldBot. Then, main calls helloWorld.tell() with HelloWorldBot set as the reply-to reference.
Same page repeats this pattern even more clearly further down in a Main class:
ActorRef<ChatRoom.RoomCommand> chatRoom = context.spawn(ChatRoom.create(), "chatRoom");
ActorRef<ChatRoom.SessionEvent> gabbler = context.spawn(Gabbler.create(), "gabbler");
context.watch(gabbler);
chatRoom.tell(new ChatRoom.GetSession("ol' Gabbler", gabbler));
The first page quoted gives one example where things can go wrong: if you "create [a] remote-deployed actor R1, send its reference to another remote actor R2 and have R2 send a message to R1". This is the obvious case. But since I kind of don't like writing applications that just maybe won't work as expected, what exactly are the cases that I need to know about? Remoting is just one of them - or, I suspect, probably the only one? And what pattern can I use to make the initial communication go as planned? For example, should I await an actor-start event before revealing the actor's reference to the next child (if there even is such a "signal" in Akka)?
I understand that no message is guaranteed to be delivered - and from that it follows that not even spawning a child is guaranteed to actually start that child. But that is a separate topic. What I am concerned with specifically is the creation of children and the initial communication between them, which Akka has clearly stressed should follow a "well-defined" order, yet which in practice hardly anyone seems to respect.
"When the initial message will be reordered compared to the creation?"
Short answer: Never. If you create an actor the ActorRef is valid immediately, can receive messages immediately, and will be in a valid state before it processes messages.
Longer answer: Reordering can happen if you aren't spawning the actor directly. As you note, the ordering guarantee only applies from one given actor to another, so if you introduce an intermediary the guarantee no longer holds. One scenario where this can happen is remote actor creation, which is why it is mentioned in the docs. But remote actor creation isn't really a typical scenario.
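The "well-defined ordering" of spawn-then-send can be sketched with a toy model (illustrative only, not Akka's machinery): if creation is modelled as the first message in the same parent-to-child FIFO mailbox, anything the parent sends after spawning necessarily arrives after the actor exists.

```python
import queue
import threading

# Toy model (not Akka): an "actor" is a thread draining a FIFO mailbox.
# Spawning is modelled as the parent enqueuing a special __create__
# message; because parent-to-child delivery is FIFO, any message the
# parent sends *after* spawning is observed only after creation.

def child(mailbox, log):
    while True:
        msg = mailbox.get()
        if msg == "__stop__":
            return
        log.append(msg)

mailbox = queue.Queue()
log = []
t = threading.Thread(target=child, args=(mailbox, log))
t.start()

mailbox.put("__create__")   # spawn, modelled as the first message
mailbox.put("hello")        # parent sends immediately after spawning
mailbox.put("__stop__")
t.join()

print(log)                  # ['__create__', 'hello'] -- creation first
```

A message routed through an intermediary would travel in a different FIFO channel and could be drained ahead of the creation message, which is the reordering the docs warn about.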

In akka-actor-typed, what is the difference between Behaviors.setup and Behaviors.receive?

Reading the Akka 2.6.10 API Docs, the difference between akka.actor.typed.scaladsl.Behaviors.setup and akka.actor.typed.scaladsl.Behaviors.receive ought to have been immediately clear to me. But it wasn't.
The documentation site provides some excellent examples, but it still took me a lot of pondering to catch on to the intended purpose for each function, which was never really stated explicitly.
In the hope of saving future Akka (Typed) newbies some time, I will try to clarify the differences between these behavior-defining functions. This is basic stuff, but it's important for understanding the rest.
akka.actor.typed.scaladsl.Behaviors.setup
Behaviors.setup defines behavior that does not wait to receive a message before executing. It simply executes its body immediately after an actor is spawned from it.
You must return a Behavior[T] from it, which might be a message-handling behavior. To do this, you would likely use Behaviors.receive or Behaviors.receiveMessage. (Or else Behaviors.stopped, if the actor only needs to do its work once and then disappear.)
Because it executes without waiting, Behaviors.setup is often used to define the behavior for the first actor in your system. Its initial behavior will likely be responsible for spawning the next brood of actors that your program will need, before adopting a new, message-handling behavior.
akka.actor.typed.scaladsl.Behaviors.receive
Behaviors.receive defines message-handling behavior. You pass it a function that has parameters for both the actor context, and an argument containing the message. Actors created from this behavior will do nothing until they receive a message of a type that this behavior can handle.

IOCP: If operation returns immediately with error, can I still receive completion notification?

If FILE_SKIP_COMPLETION_PORT_ON_SUCCESS is not enabled, then even if the operation completes immediately with success, I still get a completion notification on the completion port. I'd like to know if this is the case if it completes immediately with errors as well.
I process completions with handlers that I store as std::function in an extended OVERLAPPED struct, and are executed by the thread pool that is looping on the completion port. Having FILE_SKIP_COMPLETION_PORT_ON_SUCCESS disabled means that I don't have to worry about handlers forming a recursive chain and, worst case, running out of stack space, if the operations often complete immediately. With the skip enabled, the handler for the new operation would have to be called immediately if the operation returns right away.
The issue is that the handlers are supposed to execute both on success and on error. However, I don't know whether an overlapped Read/Write/WSARecv/WSASend that returns immediately with an error will still queue a completion packet, so that the thread pool looping on the completion port can handle it just as in the success case. Is this doable? Is it something that only applies to certain types of errors and not others? Are there workarounds?
This knowledge base article says that SUCCESS and ERROR_IO_PENDING result in a completion packet being generated and other results do not.
See Tip 4
Based on this blog from Raymond Chen, all completions will be queued to the completion port even if the operation completes synchronously (successfully or with an error condition).

What is the difference between a blocking and non-blocking read?

To the question above, add the concept of a wait/no-wait indicator passed as a parameter to a ReadMessage function in a TCP/IP or UDP environment.
A third party function description states that:
This function is used to read a message from a queue which was defined by a previous registerforinput call. The input wait/no wait indicator will determine if this function will block on the queue specified, waiting for the data to be placed on the queue. If the nowait option is specified and no data is available a NULL pointer will be returned to the caller. When data available this function will return a pointer to the data read from the queue.
What does it mean for a function to be blocking or non-blocking?
Blocking means that execution of your code (in that thread) will stop for the duration of the call. Essentially, the function call will not return until the blocking operation is complete.
A blocking read will wait until there is data available (or a timeout, if any, expires), and then returns from the function call. A non-blocking read will (or at least should) always return immediately, but it might not return any data, if none is available at the moment.
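A minimal sketch of the difference, using a pipe and Python's standard library (assuming a platform where os.set_blocking works on pipe descriptors): the non-blocking read returns control immediately by failing with EWOULDBLOCK, while the blocking read waits for data.

```python
import os

# Same pipe, read first in non-blocking mode, then in blocking mode.
r, w = os.pipe()

os.set_blocking(r, False)
try:
    os.read(r, 1)             # no data yet: returns control at once...
    got_eagain = False
except BlockingIOError:       # ...by raising EAGAIN instead of waiting
    got_eagain = True
print("non-blocking read would have blocked:", got_eagain)

os.write(w, b"x")
os.set_blocking(r, True)
data = os.read(r, 1)          # blocking read: waits for data, returns it
print(data)                   # b'x'
```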
An analogy, if you'll permit me:
You want to get into a snazzy nightclub, but the bouncer tells you that you cannot go in until someone comes out. You are effectively "blocked" on that condition. When someone comes out, you are free to go in (or you hit some error condition, such as "are those trainers?"). Your night doesn't really kick off until you get in; your enjoyment is "blocked".
In the "non-blocking" scenario, you instead leave the bouncer your phone number, and he calls you back when there is a free slot. Now you can do something else while waiting for someone to come out: you can start your night somewhere else, come back when called, and continue there.
Take a look at this: http://www.scottklement.com/rpg/socktut/nonblocking.html
Here's some excerpts from it:
'By default, TCP sockets are in "blocking" mode. For example, when you call recv() to read from a stream, control isn't returned to your program until at least one byte of data is read from the remote site. This process of waiting for data to appear is referred to as "blocking".'
'It's possible to set a descriptor so that it is placed in "non-blocking" mode. When placed in non-blocking mode, you never wait for an operation to complete. This is an invaluable tool if you need to switch between many different connected sockets, and want to ensure that none of them cause the program to "lock up."'
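The recv() behaviour the tutorial describes can be reproduced with a local socket pair using only the standard library (BlockingIOError is Python's rendering of EWOULDBLOCK):

```python
import socket

# Sketch of the tutorial's recv() point with a connected socket pair.
a, b = socket.socketpair()
b.setblocking(False)          # put the receiving socket in non-blocking mode

try:
    b.recv(1)                 # nothing sent yet
    would_block = False
except BlockingIOError:
    would_block = True        # non-blocking mode: error instead of waiting
print("recv would have blocked:", would_block)

a.send(b"y")
b.setblocking(True)
data = b.recv(1)              # blocking mode: waits for at least one byte
print(data)                   # b'y'
```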
Also, it's generally a good idea to try to search for an answer first (just type "blocking vs. non-blocking read" in a search engine), and then once you hit a wall there to come and ask questions that you couldn't find an answer to. The link I shared above was the second search result. Take a look at this great essay on what to do before asking questions on internet forums: http://www.catb.org/~esr/faqs/smart-questions.html#before
In your case, it means the function will not return until there actually is a message to return. It'll prevent your program from moving forward, but when it does move forward you'll have a message to work with.
If you specify nowait, a null pointer will be returned immediately if there are no messages on the queue, which allows you to process that situation.