So it seems I have two possibilities to get hold of a child actor instance:
By using context.actorSelection plus resolveOne, which gives me a Future[ActorRef]
context.actorSelection(actorNameString).resolveOne(2.seconds)
By using context.child, which returns an Option[ActorRef]
context.child(actorNameString)
So which one should I prefer and why?
I know that with actorSelection I can be async, but what other reasons exist to favour one over the other?
Unless you use remote deployment for your child actors (in which case I wouldn't know what to answer), or you want to get a reference to the child of a child, I don't think you should use context.actorSelection to get references to child actors.
context.actorSelection is meant to identify and get references to (multiple) actors running anywhere (on other JVMs/hosts), which is why it is asynchronous. Sure, you can use it to get hold of a child actor, but if you can, go for context.child.
I think you can see context.child as a specialized version of context.actorSelection(actorNameString).resolveOne for the case where you want a single reference to a child actor.
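For completeness, a minimal sketch of both lookups from inside a parent actor (childName is just an illustrative name):

import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorRef
import akka.util.Timeout

val childName = "worker" // hypothetical child created earlier with this name

// Synchronous lookup: Some(ref) if such a child exists, None otherwise.
val maybeChild: Option[ActorRef] = context.child(childName)

// Asynchronous resolution: the Future fails with ActorNotFound
// if nothing matches the selection within the timeout.
implicit val timeout: Timeout = 2.seconds
val resolved: Future[ActorRef] = context.actorSelection(childName).resolveOne()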
Using akka-typed I'm trying to create an event-sourced application in which a command on one actor can cause an effect on another actor. Specifically I have the following situation:
RootActor
BranchActor (it's the representation of a child of the Root)
When RootActor is issued a CreateBranch command, validation happens, and if everything is o.k. the results must be:
RootActor will update its list of children
BranchActor will be initialized with some contents (previously given in the command)
RootActor replies to the issuer of the command with OperationDone
Right now the only thing I could come up with is: RootActor processes the Event and, as a side effect, issues a command to the BranchActor, which in turn saves an initialization event and replies to the RootActor, which finally replies to the original issuer.
This looks way too complicated, though, because:
I need to use a pipe-to-self mechanism, which implies that
I need to manage internal commands as well that allow me to reply to the original issuer
I need to manage the case where that operation might fail; if it does, the creation of a branch is not atomic, whereas saving two events is atomic, in the sense that either both are saved or neither is.
I need to issue another command to another actor, but I shouldn't need to do that, because the primary command should take care of everything
The new command has to be validated, even though validation is not really necessary here because the command comes from the system rather than an "external" user.
My question then is: can't I just save from the RootActor two events, one for self, and one for a target BranchActor?
Also, as a bonus question: is this even a good practice for event-sourcing?
My question then is: can't I just save from the RootActor two events, one for self, and one for a target BranchActor?
No. Not to sound trite, but the only thing you can do to an actor is send a message to it. If you must do what you are doing, you are on the right path. (e.g. pipeTo etc.)
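For reference, a minimal sketch of that path in Akka Typed, where context.ask pipes the branch's reply back to self as an internal command. All protocol names here are hypothetical:

import scala.concurrent.duration._
import scala.util.{Failure, Success}
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.util.Timeout

case object OperationDone
case object BranchInitialized
final case class InitializeBranch(replyTo: ActorRef[BranchInitialized.type])

sealed trait RootCommand
final case class CreateBranch(replyTo: ActorRef[OperationDone.type]) extends RootCommand
// Internal commands wrapping the branch's piped-back reply.
final case class BranchAck(replyTo: ActorRef[OperationDone.type]) extends RootCommand
final case class BranchFailed(reason: Throwable) extends RootCommand

def rootActor(branch: ActorRef[InitializeBranch]): Behavior[RootCommand] =
  Behaviors.setup { context =>
    implicit val timeout: Timeout = 3.seconds
    Behaviors.receiveMessage {
      case CreateBranch(replyTo) =>
        // Ask the branch; its reply arrives back at self as an internal command.
        context.ask(branch, InitializeBranch.apply) {
          case Success(BranchInitialized) => BranchAck(replyTo)
          case Failure(e)                 => BranchFailed(e)
        }
        Behaviors.same
      case BranchAck(replyTo) =>
        replyTo ! OperationDone
        Behaviors.same
      case BranchFailed(_) =>
        // The two writes are not atomic; compensate or retry here.
        Behaviors.same
    }
  }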
is this even a good practice for event-sourcing?
It's not a good practice. Whether it's suboptimal or a flat-out anti-pattern is still debatable. (I feel like I can say this confidently because of this Lightbend Discussion thread where it was debated, with one side arguing "tricky but I have no regrets" and the other side arguing "explicit anti-pattern".)
To quote someone from an internal Slack (I don't want to attribute it to him without his permission, but I saved it because it seemed to so elegantly sum up this kind of scenario):
If an event sourced actor needs to contact another actor to make the decision if it can persist an event, then we are not modeling a consistency boundary anymore. It should only rely on the state that [it has] in scope (own state and incoming command). … all the gymnastics (persist the fact that its awaiting confirmation, stash, pipe to self) to make it work properly is an indication that we are not respecting the consistency boundary.
If you can't fix your aggregates such that one actor is responsible for the entire consistency boundary, the better practice is to enrich the command beforehand: essentially building a Saga pattern.
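A hedged sketch of such a saga in Akka Typed (every name here is hypothetical): a short-lived orchestrator drives the two entities in sequence, so neither entity has to contact the other while deciding whether to persist:

import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors

sealed trait SagaEvent
case object BranchReady extends SagaEvent // ack from the BranchActor
case object RootUpdated extends SagaEvent // ack from the RootActor
case object OperationDone
final case class InitializeBranch(contents: String, replyTo: ActorRef[BranchReady.type])
final case class RecordBranch(replyTo: ActorRef[RootUpdated.type])

// One orchestrator per CreateBranch request.
def createBranchSaga(
    root: ActorRef[RecordBranch],
    branch: ActorRef[InitializeBranch],
    contents: String,
    replyTo: ActorRef[OperationDone.type]): Behavior[SagaEvent] =
  Behaviors.setup { context =>
    branch ! InitializeBranch(contents, context.self) // step 1: init the branch
    Behaviors.receiveMessage {
      case BranchReady =>
        root ! RecordBranch(context.self)             // step 2: record on the root
        Behaviors.same
      case RootUpdated =>
        replyTo ! OperationDone                       // step 3: confirm to caller
        Behaviors.stopped
    }
  }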
I am new to Linux system commands and IPC-related topics.
I have a child process that calculates a given number's factorial and then passes the result back to the parent. The parent then prints the received output.
I must do this WITHOUT using any kind of PIPES.
At this point I have done a small amount of research on different types of IPC. The two routes I was considering were File Mapping and Mail Slot.
However, considering how basic the task is, they all seem too complicated.
What are some simple ways that I could solve this problem?
If your program is forking the children, create a shared memory region in the parent before calling fork() (for example an anonymous mmap with MAP_SHARED), and have the child write the result into that space. Note that ordinary memory is only copied at fork() (copy-on-write), so writes the child makes to normal variables are not visible to the parent; the mapping must be explicitly shared.
I have an Actor that I create from within another Actor (the parent). I also spawn several other Actors from within the parent. The tree looks like:
ParentActor
-- ServiceActor
-- ProcessActor1
-- ProcessActor2
Now I want to pass around this ServiceActor instance to the ProcessActor instances, but the problem is that the ServiceActor could choke and be killed at some point. I handle this in my parent, and I would have a Restart policy for the ServiceActor.
Now my question is: if I create all my Actors as mentioned above and after a couple of hours the ServiceActor gets restarted because of an exception, should I re-instantiate my ProcessActors?
Is the old ServiceActor ActorRef reference still valid?
An ActorRef is valid even if the underlying actor is restarted multiple times. From the official documentation:
The rich lifecycle hooks of Actors provide a useful toolkit to implement various initialization patterns. During the lifetime of an ActorRef, an actor can potentially go through several restarts, where the old instance is replaced by a fresh one, invisibly to the outside observer who only sees the ActorRef.
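To illustrate, here is a minimal sketch in classic Akka (the stub classes stand in for your actors): the parent installs a restart strategy and hands the same reference out once; it stays valid across restarts.

import scala.concurrent.duration._
import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart

class ServiceActor extends Actor { def receive: Receive = Actor.emptyBehavior }                     // stub
class ProcessActor(service: ActorRef) extends Actor { def receive: Receive = Actor.emptyBehavior }  // stub

class ParentActor extends Actor {
  // Restart (not stop) the ServiceActor on failure, so the ActorRef below
  // keeps pointing at a live, fresh instance after every restart.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: Exception => Restart
    }

  val service: ActorRef = context.actorOf(Props[ServiceActor], "service")
  // Handing out `service` once is enough; no re-instantiation of the
  // process actors is needed when the service restarts.
  val process1: ActorRef = context.actorOf(Props(new ProcessActor(service)), "process1")
  val process2: ActorRef = context.actorOf(Props(new ProcessActor(service)), "process2")

  def receive: Receive = Actor.emptyBehavior
}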
I am trying to process an event stream which can be "sessionized" into sessions. The plan is to use a pool of actors, where a single actor from the pool would process all events from one session (the reason is I need to maintain some session state). It seems to me that in order for me to achieve this, I would have to keep the ActorRef around for a particular actor which got assigned to a particular session. However, if I am using an actor pool by using:
val randomActor = _system.actorOf(Props[SessionProcessorActor].withRouter(RandomPool(100)), name = "RandomPoolActor")
Then, in this case, randomActor provides an ActorRef to the whole pool, not to the individual actors in the pool. How could I then achieve what I mentioned above?
One way I can think of is to send back the reference after the actor from the pool has been initialized (the name would probably look something like RandomPoolActor$ab etc.). This method however has a few problems, one of which is that I have to use the ask pattern instead of tell, so that I don't miss an event from the same session.
Any other way to achieve this? Any other pattern to adopt?
You could use a ConsistentHashingPool, which does something similar to what you are looking for. A ConsistentHashingRouter ensures that every message ends up in the same actor based on a hashKey. This key would be your sessionId in your scenario. There is no need to keep ActorRefs or other references around to accomplish this.
There are multiple ways of defining your hashKey in your code. I would recommend creating a case class that extends ConsistentHashable. Once done you will be required to implement the method consistentHashKey. Example:
import akka.routing.ConsistentHashingRouter.ConsistentHashable

// Envelope that exposes the session id as the consistent-hashing key.
case class HashableEnvelope(yourMsgClass: YourMsgClass) extends ConsistentHashable {
  override def consistentHashKey = yourMsgClass.sessionId
}
Then you can define your pool like this:
val pool = system.actorOf(Props[SessionProcessorActor].withRouter(ConsistentHashingPool(100)))
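Sending then goes through the envelope; for example (the YourMsgClass fields here are hypothetical):

// Every message carrying sessionId "abc123" lands on the same pool member.
pool ! HashableEnvelope(YourMsgClass(sessionId = "abc123", payload = "event-1"))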
Another thing to mention is that the router will ensure that all messages with the same hashKey end up in the same actor; however, it does not ensure that a particular actor receives messages for only one hashKey. It can receive messages for multiple hashKeys. That should not be a problem; your SessionProcessorActor just has to be able to handle a few hashKeys instead of just one.
The consistent hashing algorithm decides which messages go to which actor. You can read how it works on Wikipedia: https://en.wikipedia.org/wiki/Consistent_hashing. To distribute messages more evenly, you should increase the number of virtual nodes in the configuration (the default is 10):
akka.actor.deployment.default.virtual-nodes-factor = 1000
Depending on how many sessionIds and actors you have, you will see messages getting distributed more evenly.
At the moment I have this actor session management implementation running on only one node:
1) I have a SessionManager actor that handles all sessions
2) The SessionManagerActor receives two messages: CreateSesion(id) and ValidateSesion(id)
3) When the SessionManagerActor receives a CreateSesion(id) message, it creates a SessionActor using the actorOf method like so:
context.actorOf(Props(new SesionActor(expirationTime)), id)
4) When the SessionManagerActor receives a ValidateSesion(id) message, it looks for an existing SessionActor and checks whether it exists using the resolveOne method like so:
context.actorSelection("akka://system/user/sessionManager/" + id).resolveOne()
That logic works nicely, but I need to implement the same behavior across multiple nodes (a cluster).
My question is: which method is recommended to implement this session management behavior so that it works on one or multiple nodes?
I've read the Akka documentation and it provides akka-remote, akka-cluster, akka-cluster-sharding, akka-cluster-singleton, and akka-distributed-publish-subscribe-cluster, but I'm not sure which one is the appropriate and simplest way to do it. (Note that the SessionActors are stateless and I need to locate them anywhere in the cluster.)
Since you have a protocol where you validate whether a session already exists or not and have a time-to-live on the session, this is technically not completely stateless. You probably would not, for example, want to lose existing sessions and spin them up again arbitrarily, and you probably don't want to have multiple sessions created per id.
Therefore, I would look at the cluster sharding mechanism, possibly in combination with akka-persistence to persist the expiration state of the session.
This will give you a fault tolerant set up with rebalancing when nodes go down or new nodes come up.
The Activator template akka-cluster-sharding-scala may be helpful for example code.
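As a hedged sketch of the cluster sharding route (classic Akka; the shard count and names are illustrative, and the SesionActor stub stands in for yours):

import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

final case class CreateSesion(id: String)
final case class ValidateSesion(id: String)

// Stub standing in for the question's session actor.
class SesionActor(expirationTime: FiniteDuration) extends Actor {
  def receive: Receive = { case _ => /* session logic */ }
}

// Route any session message to the entity whose name is the session id.
val extractEntityId: ShardRegion.ExtractEntityId = {
  case msg @ CreateSesion(id)   => (id, msg)
  case msg @ ValidateSesion(id) => (id, msg)
}

// Map session ids onto a fixed number of shards.
val numberOfShards = 100
val extractShardId: ShardRegion.ExtractShardId = {
  case CreateSesion(id)   => (math.abs(id.hashCode) % numberOfShards).toString
  case ValidateSesion(id) => (math.abs(id.hashCode) % numberOfShards).toString
}

val system = ActorSystem("system") // requires cluster configuration to run
val sessionRegion = ClusterSharding(system).start(
  typeName        = "Session",
  entityProps     = Props(new SesionActor(expirationTime = 30.minutes)),
  settings        = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId  = extractShardId)

// The region creates or locates the session's actor somewhere in the
// cluster transparently; no manual resolveOne is needed.
sessionRegion ! CreateSesion("abc123")
sessionRegion ! ValidateSesion("abc123")

Each node that should host sessions calls start with the same arguments, and messages sent to the returned region ActorRef are routed to the right node.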