When can an initial message be reordered with the initial creation of an actor? - akka

In Akka, message ordering is only guaranteed between a "given pair of actors" (source).
The same page also states that "this rule is not transitive" and gives a theoretical example: actor A sends message M1 to actor C, then sends message M2 to actor B, who forwards it to actor C. Actor C can therefore receive M1 and M2 in either order.
The page also highlights how creating an actor is the same as sending a message to it, and thus how some other actor could end up sending messages to an actor that does not yet exist:
Actor creation is treated as a message sent from the parent to the
child, with the same semantics as discussed above. Sending a message
to an actor in a way which could be reordered with this initial
creation message means that the message might not arrive because the
actor does not exist yet. [...] An example of well-defined ordering is
a parent which creates an actor and immediately sends a message to it.
So far so good. The problem is that, from what I can gather (I am new to Akka!), the most common pattern in Akka code is for an actor to spawn children and then have that shared parent initiate the communication between the children, which is not "well-defined ordering". Are the majority of the world's Akka applications fundamentally broken, an accident waiting to happen?
Here are a few examples from Akka themselves.
On this page they have a HelloWorldMain which spawns two children, HelloWorld and HelloWorldBot. Then main calls helloWorld.tell() with HelloWorldBot set as the reply-to reference.
The same page repeats this pattern even more clearly further down, in a Main class:
ActorRef<ChatRoom.RoomCommand> chatRoom = context.spawn(ChatRoom.create(), "chatRoom");
ActorRef<ChatRoom.SessionEvent> gabbler = context.spawn(Gabbler.create(), "gabbler");
context.watch(gabbler);
chatRoom.tell(new ChatRoom.GetSession("ol’ Gabbler", gabbler));
The first page quoted gives one example of where things can go wrong: if you "create a remote-deployed actor R1, send its reference to another remote actor R2 and have R2 send a message to R1". This is the obvious case. But since I don't like writing applications that just might not work as expected, what exactly are the cases I need to know about? Remoting is just one of them - or, I suspect, probably the only one? And what pattern can I use to make the initial communication go as planned? For example, should I await an actor-start event before revealing the actor's reference to the next child (if there even is such a "signal" in Akka)?
I understand that no message is guaranteed to be delivered - and from that it follows that not even spawning a child is guaranteed to actually start that child. But that is a separate topic. What I am concerned with specifically is the creation of children and that initial communication between them, which Akka has clearly stressed should follow a "well-defined" order, yet which in practice seemingly no one respects 😂😂😥

"When the initial message will be reordered compared to the creation?"
Short answer: Never. If you create an actor the ActorRef is valid immediately, can receive messages immediately, and will be in a valid state before it processes messages.
Longer answer: Reordering can happen if you aren't spawning the actor directly. Because, as you note, the ordering guarantee only applies from one given actor to another. Thus if you introduce an intermediary the guarantee no longer applies. One scenario where this might happen is using
remote actor creation, which is why it is mentioned in the docs. But remote actor creation isn't really a typical scenario.
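To make the "well-defined ordering" case concrete, here is a minimal sketch (Akka Typed, Java DSL; the Greeter/Greet names are made up for illustration). The parent both spawns the child and sends it the first message, so creation and that message come from the same sender and cannot be reordered:
import akka.actor.typed.ActorRef;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;

public class WellDefinedOrdering {

    // Hypothetical message type, only for illustration.
    public static final class Greet {
        public final String whom;
        public Greet(String whom) { this.whom = whom; }
    }

    // A child that simply logs every Greet it receives.
    static Behavior<Greet> greeter() {
        return Behaviors.receive((context, message) -> {
            context.getLog().info("Hello, {}!", message.whom);
            return Behaviors.same();
        });
    }

    // The parent spawns the child and immediately tells it a message.
    // Creation and Greet both originate from this parent, so the
    // pairwise ordering guarantee applies: Greet cannot overtake creation.
    static Behavior<Void> parent() {
        return Behaviors.setup(context -> {
            ActorRef<Greet> child = context.spawn(greeter(), "greeter");
            child.tell(new Greet("world"));
            return Behaviors.empty();
        });
    }

    public static void main(String[] args) {
        ActorSystem.create(parent(), "ordering-demo");
    }
}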

Related

What's the most common way to translate UML signals to C++?

What's the most common method to translate UML signals and their receivers into C++? What would be the C++ equivalent of a signal and its receiver?
Is it just a method call at the end of the day?
From the Rational UML documentation:
https://www.ibm.com/docs/en/rational-soft-arch/9.7.0?topic=diagrams-signals
In UML models, signals are model elements that are independent of the
classifiers that handle them. Signals specify one-way, asynchronous
communications between active objects. Signals are often used in
event-driven systems and distributed computing environments. For
example, a communications system might contain a Pager class, whose
objects wait for, and respond to, Page signals. Signals differ from
other message types in that when an object receives a signal, the
object does not need to return anything, but reacts to the receipt of
a signal according to the behavior specified by its receptions.
All signals are assumed to have a send( ) operation. A signal’s
attributes represent the data it carries in its send operation.
Signals can have no other operations.
In other words, UML "signals" (and "receivers", "events", etc.) are abstractions that map to your APPLICATION. They are NOT "language constructs" per se.
More specifically, when your app implements a "signal", it might have a C++ function or class method called "send()". Send() might send a POSIX signal (e.g. kill()), it might post something to a message queue, or any of a million other possibilities.
In short, if your "design" specifies UML "signals", then your C++ code will depend entirely on whatever "implementation" you've chosen.
In that sense, yes: it IS "just a method call at the end of the day" :)
Though paulsm4's answer is already correct, I would like to add what the UML authors say about signals. On p. 167 of UML 2.5 you find:
10.3.3.1 Signals
A Signal is a specification of a kind of communication between objects in which a reaction is asynchronously triggered in the receiver without a reply. The receiving object handles Signals as specified by clause 13.3. The data carried by the communication are represented as attributes of the Signal. A Signal is defined independently of the Classifiers handling it.
The sender of a Signal will not block waiting for a reply but continue execution immediately. By declaring a Reception associated to a given Signal, a Classifier specifies that its instances will be able to receive that Signal, or a subtype thereof, and will respond to it with the designated Behavior.
A Signal may be parameterized, bound, and used as TemplateParameters.
10.3.3.2 Receptions
A Reception specifies that its owning Class or Interface is prepared to react to the receipt of a Signal. A Reception matches a Signal if the received Signal is a specialization of the Reception’s signal. The details of how the object responds to the received Signal depend on the kind of Behavior associated with the Reception and its owning Class or Interface. See 13.2. The name of the Reception is the same as the name of the Signal. A Reception may only have in Parameters (see 9.4.3) that match the attributes of the Signal by name, type, and multiplicity.
Since UML per se is language-agnostic that's all you have. And how any compiler/coder realizes this is completely open.
To the two already excellent answers, I'd like to add some practical aspects:
Who better than Booch, Rumbaugh and Jacobson, the inventors of UML (and the creators of the early Rational tools), to explain what a signal is supposed to be:
A message is a named object that is sent asynchronously by one object and then received by another. A signal is a classifier for messages; it is a message type. (...) Signals have a lot in common with plain classes. (...) The attributes of a signal serve as its parameters. - Booch, Rumbaugh and Jacobson in UML User Guide, 2nd ed. (Chap 21, Events and signals)
In C++ signals would therefore often be represented by classes. A reception would then be represented by a member function with an argument of that class.
A frequent alternative is to represent them with a member function whose parameters are the signal's parameters (this approach is used in some frameworks, for example here for events, or here where Q_SLOT represents signal receptions in Qt). That function should be designed to be called asynchronously. This is often sufficient, but it makes specialization of signals trickier and harder to maintain.
Some additional remarks:
Most classes are not active classes: since they own neither their process nor their thread, OS-level signals are rarely the relevant implementation of receptions. When they are, usually some callback function maps them to the chosen implementation.
Separation of concerns and the single responsibility principle still apply: in a class-based implementation, a signal's responsibility would be the message content, not also dispatching itself (a change in dispatching technology would otherwise be a second reason to change, which infringes the SRP).
So the sender would either invoke a sending function on a dispatching class (an event queue, which could be as simple as an observer variant or a chain of responsibility), or invoke the "reception" function of the receiver directly (if needed, using some asynchronous mechanism such as std::future/std::async or promises).
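To illustrate the class-based mapping above, here is a minimal sketch. It is written in Java to stay consistent with the other code in this thread, but the shape carries over directly to C++ (a class or struct for the signal, a member function for the reception, std::async or a queue for the asynchronous dispatch). The Pager/PageSignal names follow the IBM example quoted earlier; everything else is made up for illustration:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The signal is a plain data carrier: its attributes are the data it sends.
final class PageSignal {
    final String message;
    PageSignal(String message) { this.message = message; }
}

// The receiver declares a "reception": a method that reacts to the signal.
final class Pager {
    void onPage(PageSignal signal) {
        System.out.println("Paged: " + signal.message);
    }
}

// The dispatcher owns the asynchronous delivery, keeping the signal itself
// free of any dispatching responsibility (single responsibility principle).
final class SignalDispatcher {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    CompletableFuture<Void> send(PageSignal signal, Pager receiver) {
        // The sender does not block waiting for a reply.
        return CompletableFuture.runAsync(() -> receiver.onPage(signal), executor);
    }

    void shutdown() { executor.shutdown(); }
}

public class SignalDemo {
    public static void main(String[] args) {
        SignalDispatcher dispatcher = new SignalDispatcher();
        dispatcher.send(new PageSignal("meeting in 5 minutes"), new Pager())
                  .join(); // joined only so the demo prints before exiting
        dispatcher.shutdown();
    }
}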

In akka-actor-typed, what is the difference between Behaviors.setup and Behaviors.receive?

Reading the Akka 2.6.10 API Docs, the difference between akka.actor.typed.scaladsl.Behaviors.setup and akka.actor.typed.scaladsl.Behaviors.receive ought to have been immediately clear to me. But it wasn't.
The documentation site provides some excellent examples, but it still took me a lot of pondering to catch on to the intended purpose for each function, which was never really stated explicitly.
In the hope of saving future Akka (Typed) newbies some time, I will try to clarify the differences between these behavior-defining functions. This is basic stuff, but it's important for understanding the rest.
akka.actor.typed.scaladsl.Behaviors.setup
Behaviors.setup defines behavior that does not wait to receive a message before executing. It simply executes its body immediately after an actor is spawned from it.
You must return a Behavior[T] from it, which might be a message-handling behavior. To do this, you would likely use Behaviors.receive or Behaviors.receiveMessage. (Or else Behaviors.stopped, if the actor is only required to do its work once and then disappear.)
Because it executes without waiting, Behaviors.setup is often used to define the behavior for the first actor in your system. Its initial behavior will likely be responsible for spawning the next brood of actors that your program will need, before adopting a new, message-handling behavior.
akka.actor.typed.scaladsl.Behaviors.receive
Behaviors.receive defines message-handling behavior. You pass it a function that has parameters for both the actor context, and an argument containing the message. Actors created from this behavior will do nothing until they receive a message of a type that this behavior can handle.
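The above refers to the Scala DSL; the Java DSL offers the same pair of factories. Here is a minimal sketch combining them (Java DSL, with a made-up Counter/Increment example), to show where each one runs:
import akka.actor.typed.ActorSystem;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;

public class SetupVsReceive {

    // Hypothetical message type, only for illustration.
    public static final class Increment {}

    public static Behavior<Increment> counter() {
        // setup: runs once, immediately after the actor is spawned and
        // before any message is processed - good for initialization work
        // such as spawning children or acquiring resources.
        return Behaviors.setup(context -> {
            context.getLog().info("counter starting");

            // receive: the behavior returned here does nothing until a
            // message arrives; the handler gets both the context and the message.
            return Behaviors.receive((ctx, message) -> {
                ctx.getLog().info("got an Increment");
                return Behaviors.same();
            });
        });
    }

    public static void main(String[] args) {
        ActorSystem<Increment> system = ActorSystem.create(counter(), "setup-vs-receive");
        system.tell(new Increment());
    }
}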

What benefit do Props bring to actor creation in Akka?

Being new to Akka, I need help understanding, in simple terms, what the benefit of Props is. What is the problem with common OO-style object creation?
What I know is that this follows the factory pattern, where you send the class and its properties in a Props to a factory, and that factory creates the actor for you. [Correct me if I'm wrong.]
BUT I fail to see the need, even though I know that this is fundamental. This is my dilemma.
Can you please help me understand this may be by way of an analogy/code?
I see two advantages to this way of creating actors.
The first one is simple: it gives you a guarantee that when an Actor object is created, it's also properly registered in the actor system (it must have a parent actor to supervise it, gets pushed messages by the dispatcher, etc.). So you never end up with an object which is of type Actor, but actually exists outside of the actor system.
The second one is visible in the definition of the actorOf(props: Props): ActorRef method: it doesn't actually return an Actor, but rather an ActorRef (and the ActorRef doesn't expose a reference to the underlying Actor either).
This means that you never get direct access to the actor itself, and cannot circumvent the Akka API by directly calling methods on the actor, instead of sending async messages. If you built the Actor yourself, you would obviously get direct access to the actor, making it way too easy to access it in ways which break the guarantees of the actor model.
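A minimal sketch of that flow in classic Akka (Java; the Greeter name is made up for illustration) - you hand the system a Props describing how to construct the actor, and all you ever get back is an ActorRef:
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class PropsDemo {

    public static class Greeter extends AbstractActor {
        private final String greeting;

        public Greeter(String greeting) {
            this.greeting = greeting;
        }

        // Common pattern: a static factory producing the Props.
        public static Props props(String greeting) {
            return Props.create(Greeter.class, () -> new Greeter(greeting));
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(String.class, name -> System.out.println(greeting + ", " + name))
                .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("props-demo");

        // actorOf takes the Props, registers the actor in the system,
        // and returns only an ActorRef - never the Greeter instance itself.
        ActorRef greeter = system.actorOf(Greeter.props("Hello"), "greeter");

        // The only way to interact with it is asynchronously, via messages.
        greeter.tell("world", ActorRef.noSender());

        system.terminate();
    }
}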
The main reason for the usage of Props in Akka is that the lifecycle of the Actor instances are completely managed by the ActorSystem.
In the simplest case, your Actor will be instantiated once, and then chug along happily. If that was the only use case, then a standard dependency injection approach would probably work.
However, that simplest case is thrown out the window once the Akka supervision mechanism kicks in. In Akka, when an Actor throws an Exception, a supervision mechanism decides what should happen to that actor. Typically, the supervisor has to decide whether:
The exception is expected and doesn't really cause a problem => The actor is resumed and continues operating as normal.
The exception is fatal, and the actor cannot be allowed to continue => The actor is killed, its mailbox is destroyed and all of the messages it contains are sent to the dead letters.
The exception is bad but recoverable => The actor is restarted. This means that the existing Actor instance is discarded but its mailbox is retained. A new, fresh Actor instance is created from the Props, and will start processing the original mailbox.
The Props mechanism is required for handling the restarting case.
I believe that the Props mechanism is also useful when creating a distributed ActorSystem, as it allows instantiating the Actor on another JVM if necessary.
All those cases cannot be handled by a standard factory pattern or by standard dependency injection mechanisms.
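For illustration, a hedged sketch of what such a supervision decision looks like in classic Akka (Java API); the exception-to-directive mapping here is arbitrary and only meant to show the resume/restart/stop cases described above:
import java.time.Duration;

import akka.actor.AbstractActor;
import akka.actor.OneForOneStrategy;
import akka.actor.Props;
import akka.actor.SupervisorStrategy;
import akka.japi.pf.DeciderBuilder;

// A parent whose supervision strategy maps exception types to the
// resume / restart / stop decisions. On restart, Akka uses the child's
// Props to build a fresh instance and re-attach the existing mailbox -
// which is exactly why the Props must be kept around.
public class Supervisor extends AbstractActor {

    private final SupervisorStrategy strategy =
        new OneForOneStrategy(
            10,                      // at most this many restarts...
            Duration.ofMinutes(1),   // ...within this time window
            DeciderBuilder
                .match(ArithmeticException.class, e -> SupervisorStrategy.resume())
                .match(IllegalStateException.class, e -> SupervisorStrategy.restart())
                .matchAny(o -> SupervisorStrategy.stop())
                .build());

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            // Create a child from any Props sent to this supervisor
            // and reply with its ActorRef.
            .match(Props.class, props -> getSender().tell(getContext().actorOf(props), getSelf()))
            .build();
    }
}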
I don't think there is any advantage either.
Actually, I find the API weird to understand.
Why not something like this instead:
ActorRef actorA = system.getRefByCreateActor(ActorA.class, Props.create(......));
NoSender.tell(actorA, message);

Is synchronisation required for DBUS?

I have a doubt about how DBUS calls are synchronised or made safe. This is more of a conceptual question.
I know that this situation does not arise with sockets: the consumer blocks until the producer has finished its task - recvfrom() and sendto() in sockets/pipes. Basically, the producer and consumer jobs are well defined on the two sides of a socket.
Here is a scenario. I am actually using Qt D-Bus in my project, but I think this is not specific to Qt D-Bus; the question holds for any DBUS usage.
There is a function "GetTemperatureValue()" on a class. This class registers its service, including "GetTemperatureValue()", with DBUS, so "GetTemperatureValue()" is invokable from a DBUS client. Now suppose the value returned by "GetTemperatureValue()" changes periodically - say a timer updates the temperature value very frequently. If the client invokes "GetTemperatureValue()" through a DBUS call in between updates, wouldn't it get a garbage value? Since there is no protection, the temperature value returned could be corrupted. Is there a need for protection here? If protection is required, how can we do so?
Here is a qt-dbus example also - http://www.tune2wizard.com/linux-qt-signals-and-slots-qt-d-bus/
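As a DBUS-agnostic illustration of the concern (sketched in Java, all names made up): one thread updates the value on a timer while another thread, standing in for the D-Bus call handler, reads it; guarding both accessors keeps the read consistent. Whether the real Qt D-Bus service actually needs this depends entirely on which threads the timer and the call dispatch run on:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TemperatureService {
    private double temperature; // shared state updated by the timer

    public synchronized void setTemperature(double value) {
        temperature = value;
    }

    // Stand-in for the exported GetTemperatureValue() method.
    public synchronized double getTemperatureValue() {
        return temperature;
    }

    public static void main(String[] args) throws InterruptedException {
        TemperatureService service = new TemperatureService();
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        // The "timer" updating the value very frequently.
        timer.scheduleAtFixedRate(
            () -> service.setTemperature(Math.random() * 100.0), 0, 1, TimeUnit.MILLISECONDS);

        // The "client call" reading it concurrently.
        for (int i = 0; i < 5; i++) {
            System.out.println("GetTemperatureValue() -> " + service.getTemperatureValue());
            Thread.sleep(10);
        }
        timer.shutdown();
    }
}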

In Akka, what happens if I watch() a dead ActorRef?

If I call context.watch() on an ActorRef that is already dead, am I guaranteed to still receive a termination message?
Also, after having received a termination message regarding a specific actor, do I still need to call unwatch()?
Also, are watch() calls reference counted? If I call watch() twice, followed by unwatch() once, am I guaranteed to still get termination messages?
I think the documentation is pretty clear:
"One important property is that the message will be delivered
irrespective of the order in which the monitoring request and target’s
termination occur, i.e. you still get the message even if at the time
of registration the target is already dead."
http://doc.akka.io/docs/akka/2.0.1/general/supervision.html
And, you do not need to unwatch since the actor can't die twice, and it's not reference counted. It's binary.
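For illustration, a minimal sketch (classic Akka, Java; the Watcher name is hypothetical) of watching a reference and reacting to Terminated - per the quote above, the message arrives even if the target was already dead when watch() was called:
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Terminated;

public class Watcher extends AbstractActor {
    private final ActorRef target;

    public Watcher(ActorRef target) {
        this.target = target;
    }

    @Override
    public void preStart() {
        // Registration is binary (not reference counted), per the answer above.
        getContext().watch(target);
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Terminated.class, t -> {
                // t.getActor() is the ActorRef that terminated.
                System.out.println("Observed termination of " + t.getActor());
                getContext().stop(getSelf());
            })
            .build();
    }
}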
Cheers,
√