Java/Akka (v2.3.9) here. Each of my Akka UntypedActor subclasses has the ability to respond to several "generic" messages, such as ExecuteOrder66:
// Groovy pseudo-code
class StormTrooper extends UntypedActor {
    @Override
    void onReceive(Object message) throws Exception {
        if(message instanceof ExecuteOrder66) {
            // Betray the Jedi, serve only the Emperor.
        }
    }
}
Let's say I have 100 different actor subclasses that each support ExecuteOrder66. I need a way to broadcast an instance of this message to every single one of my actors; essentially a public broadcast announcement that everybody gets.
I think the link to the Akka docs above gets me close, but I'm not seeing an option that sends an ExecuteOrder66 to every single one of my actors. Any ideas?
The problem is that it is not quite clear who "everybody" is. What if some actor a gets a handshake message from some other actor b in a remote actor system, stores b's reference, exchanges a few messages, then fails and restarts without b's reference? Is b part of "everybody"? Who is responsible for finding actor b again? How is one even supposed to know that b is still alive?
However, if you have a single specific actor system, a path selection with wildcards could do what you want. Something like this might do the job:
mySystem.actorSelection("akka://mySystemName/**")
This actor selection can then be used to tell (!) your broadcast message to every actor on the system. You might also consider being a little more restrictive and selecting only the actors under /user, without touching the system actors.
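For example, a minimal sketch of the more restrictive /user variant. It reuses the same double-wildcard syntax as the path above (so the same disclaimer applies), and the message type and system name are just placeholders, not something from your code:

import akka.actor.ActorSystem

case object ExecuteOrder66 // placeholder for the broadcast message

val mySystem = ActorSystem("mySystemName")

// Select only the actors under the /user guardian, at any depth,
// and tell the order to all of them.
mySystem.actorSelection("/user/**") ! ExecuteOrder66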
Disclaimer: a little ad-hoc actor system I've just set up in the REPL doesn't complain about the path indicated above, but I did not test it thoroughly. A runnable toy example might be helpful.
By the definition of CQRS, a command can/should be validated and may even be declined in the end (if validation does not pass). As part of my command validation I check whether a state transition is really needed. So let's take a simple, dummy example: the actor is in state A. A command is sent to the actor to transition to state B. The command gets validated and in the end a StateBUpdated event is generated. Then the exact same command is sent to transition to state B. Again the command gets validated, and during the validation it is decided that no event will be generated (since we are already in state B); the actor just responds that the command was processed and everything is ok. It is a kind of idempotency thing.
Nevertheless, I am having a hard time (unit) testing this. The usual unit test for a persistent actor looks like sending a command to the actor, then restarting the actor and checking that the state is persisted. I want to send a command to the actor and check how many events were generated. How do I do that?
Thanks
We faced this problem while developing our internal CQRS framework based on Akka Persistence. Our solution was to use Persistence Query (https://doc.akka.io/docs/akka/2.5/scala/persistence-query.html). In case you haven't used it, it is a query interface that journal plugins can optionally implement, and it can be used as the read side in a CQRS system.
For your testing purposes, the method would be eventsByPersistenceId, which will give you an akka streams Source with all the events persisted by an actor. The source can be folded into a list of events like:
public CompletableFuture<List<Message<?>>> getEventsForIdAsync(String id, FiniteDuration readTimeout) {
    // Stream every event persisted under this persistenceId...
    return ((EventsByPersistenceIdQuery) readJournal).eventsByPersistenceId(id, 0L, Long.MAX_VALUE)
        // ...but stop reading after the given timeout, since the stream is otherwise unbounded
        .takeWithin(readTimeout)
        .map(eventEnvelope -> (Message<?>) eventEnvelope.event())
        // Fold the stream into a plain list of events
        .<List<Message<?>>>runFold(
            new ArrayList<Message<?>>(),
            (list, event) -> {
                list.add(event);
                return list;
            }, materializer)
        .toCompletableFuture();
}
Sorry if the above seems bloated; we use Java, so if you are used to Scala it is indeed ugly. Getting the readJournal is as easy as:
ReadJournal readJournal = PersistenceQuery.lookup().get(actorSystem)
    .getReadJournalFor(InMemoryReadJournal.class, InMemoryReadJournal.Identifier());
You can see that we use the akka.persistence.inmemory plugin since it is the best for testing, but any plugin which implements the Persistence Query API would work.
We actually made a BDD-like test API inside our framework, so a typical test looks like this:
fixture
.given("ID1", event(new AccountCreated("ID1", "John Smith")))
.when(command(new AddAmount("ID1", 2.0)))
.then("ID1", eventApplied(new AmountAdded("ID1", 2.0)))
.test();
As you can see, we also handle the case of setting up previous events in the given clause, as well as potentially dealing with multiple persistenceIds (we use ClusterSharding).
From your description it sounds like you need either to mock your persistence, or at least be able to access its state easily. I was able to find two projects that will do that:
akka-persistence-mock, which is designed for use in testing, but is not actively developed.
akka-persistence-inmemory, which is very useful when testing persistent actors, persistent FSM and Akka Cluster.
I would recommend the latter, since it provides the possibility of retrieving all messages from the journal.
I am trying to process an event stream which can be "sessionized" into sessions. The plan is to use a pool of actors, where a single actor from the pool would process all events from one session (the reason is that I need to maintain some session state). It seems to me that in order to achieve this, I would have to keep around the ActorRef of the particular actor that got assigned to a particular session. However, if I am using an actor pool created like this:
val randomActor = _system.actorOf(Props[SessionProcessorActor].withRouter(RandomPool(100)), name = "RandomPoolActor")
Then, in this case, randomActor is an ActorRef to the whole pool (the router), not to the individual actors in the pool. How could I then achieve what I mentioned above?
One way I can think of is to send back the reference after the actor from the pool has been initialized (it would probably look something like RandomPoolActor$ab etc.). This method however has a few problems, one of which is that I have to use the ask pattern instead of tell, so that I don't miss an event from the same session.
Any other way to achieve this? Any other pattern to adopt?
You could use a ConsistentHashingPool, which does something similar to what you are looking for. A consistent hashing router ensures that every message ends up in the same actor based on a hashKey. This key would be your sessionId in your scenario. There is no need to keep ActorRefs or other references around to accomplish this.
There are multiple ways of defining your hashKey in your code. I would recommend creating a case class that extends ConsistentHashable. Once done you will be required to implement the method consistentHashKey. Example:
case class HashableEnvelope(yourMsgClass: YourMsgClass) extends ConsistentHashable {
  override def consistentHashKey = yourMsgClass.sessionId
}
Then you can define your pool like this:
val pool = system.actorOf(Props[SessionProcessorActor].withRouter(ConsistentHashingPool(100)))
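For completeness, a small usage sketch under the same placeholder names (YourMsgClass, SessionProcessorActor) as above; this is an illustration, not code from the answer:

// Placeholder message type, matching the answer's YourMsgClass
case class YourMsgClass(sessionId: String, payload: String)

// Sending side: the router hashes consistentHashKey (the sessionId), so every
// message for "session-42" is routed to the same SessionProcessorActor instance.
pool ! HashableEnvelope(YourMsgClass("session-42", "some event"))

// Receiving side, inside SessionProcessorActor: a message implementing
// ConsistentHashable is delivered as-is (only Akka's built-in
// ConsistentHashableEnvelope gets unwrapped), so the actor unwraps it itself.
def receive = {
  case HashableEnvelope(msg) => println(s"session ${msg.sessionId}: ${msg.payload}")
}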
Another thing to mention is that the router will ensure that all messages with the same hashKey end up in the same actor; however, it does not ensure that a particular actor receives only messages for a single hashKey. It can receive messages for multiple hashKeys. That should not be a problem, your SessionProcessorActor just needs to be able to handle a few hashKeys instead of exactly one.
The consistent hashing algorithm decides which messages go to which actor. You can read how it works on Wikipedia: https://en.wikipedia.org/wiki/Consistent_hashing. To distribute messages more evenly you should increase the number of virtual nodes in the configuration (the default is 10):
akka.actor.deployment.default.virtual-nodes-factor = 1000
Depending on how many sessionIds and actors you have, you will see that messages are distributed more evenly.
I have written some actor classes and I find that I need a hook into the lifecycle of these entities. For example, whenever my actor is initialized I would like a method to be called so that I can set up some listeners on message queues (or open DB connections, etc.).
Is there an equivalent of this in Akka? The closest equivalent I can think of is Spring's InitializingBean and DisposableBean.
This is a typical scenario where you would override methods like preStart(), postStop(), etc. I don't see anything wrong with this.
Of course you have to be aware of the details - for example, postStop() is called asynchronously after actor.stop() is invoked, while preStart() is called when an Actor is started. This means that potentially slow/blocking things like DB interaction should be kept to a minimum.
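For example, a minimal sketch (the resource type and openConnection() helper are made up for illustration):

import akka.actor.Actor

class ResourceOwningActor extends Actor {
  // Hypothetical resource handle (e.g. a DB connection or a queue consumer)
  private var resource: Option[AutoCloseable] = None

  override def preStart(): Unit = {
    // Runs before the first message is processed: acquire the resource here (keep it quick)
    resource = Some(openConnection())
  }

  override def postStop(): Unit = {
    // Runs after the actor has stopped: release the resource
    resource.foreach(_.close())
  }

  def receive = {
    case msg => println(s"handling $msg with $resource")
  }

  // Stub standing in for real setup code
  private def openConnection(): AutoCloseable = new AutoCloseable {
    override def close(): Unit = ()
  }
}

Note that with the default preRestart/postRestart implementations these hooks also run around restarts, so a supervisor-triggered restart will release and re-acquire the resource.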
You can also use the Actor's constructor for initialization of data.
As Matthew mentioned, supervision plays a big part in Akka - so you can instruct the supervisor to perform specific actions on certain events. For example the so-called DeathWatch - you can be notified when one of the actors you are "watching" dies:
context.watch(child)
...
def receive = {
  case Terminated(`child`) => lastSender ! "finished"
}
An Actor is basically two methods -- a constructor, and onReceive(Object): void.
There's nothing in its lifecycle that naturally provides for "wiring" behavior, which leaves you with a few options.
Use a Supervisor actor to create your other actors. A supervisor is responsible for watching, starting and restarting actors on failure -- and therefore it is often valuable to have a supervisor that understands the state of integrated systems, to avoid continuously restarting. This supervisor would create and manage Service objects (possibly via Spring) and pass them to Actors.
Use your preferred Initialization technique at the time of Actor construction. It's tricky but you can certainly combine Spring with Actors. Just be aware that should a Supervisor restart your actor, you'll need to be able to resurrect its desired state from whatever content you placed in the Props object you used to start it in the first place.
Wire everything on-demand. Open connections on demand when an Actor starts (and cache them as necessary). I find I do this fairly often -- and I let the Actor fail when its connections no longer work. The supervisor will restart the Actor, which will recreate all connections.
Some important things to remember:
The intent of Actor model is that Actors don't run continuously -- they only run when there are messages provided to them. If you add a message listener to an Actor, you are essentially adding new threads that can access that actor. This can be a problem if you use supervision -- a restarted actor may leak that thread and this may in turn cause the actor not to be garbage collected. It can also be a problem because it introduces a race condition, and part of the value of actors is avoiding that.
An Actor that does I/O is, from the perspective of the actor system, blocking. If you have too many Actors doing I/O at the same time, you will exhaust your Dispatcher's thread pool and lock up the system.
A given Actor instance can operate on many different threads over its lifetime, but will only operate on one thread at a time. This can be confusing to some messaging systems -- for example, the JMS spec asserts that a Session must not be used on multiple threads, and many JMS providers interpret this as "can only run on the thread on which it was started." You may see warnings, or even exceptions, resulting from this.
For these reasons, I prefer to use non-actor code to do some of my I/O. For example, I'll have an incoming message listener object whose responsibility is to take JMS messages off a queue, use them to create POJO messages, and send tells to the Actor system. Alternately, I'll use an Actor, but place that actor on a custom Dispatcher that has thread pinning enabled. This ensures that the Actor will only run on a specific thread and won't block the system that the other non-I/O actors are using.
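A sketch of what such a pinned dispatcher could look like; the dispatcher name and actor class are placeholders, not something from the answer. In application.conf:

my-pinned-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}

and when creating the I/O actor:

system.actorOf(Props[JmsListenerActor].withDispatcher("my-pinned-dispatcher"), "jmsListener")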
I have a test for a particular actor. This actor depends on some other actors, so I use TestProbe() to test it in isolation.
My problem is that I receive more messages than I am interested in testing in this particular test. For example:
val a = TestProbe()
val b = TestProbe()
val actor = TestActorRef(new MyActor(a.ref, b.ref))
actor ! Message(1, 2)
b.expectMsg(3)
The test fails, because while creating MyActor it sends some kind of "registration" message to the actors passed in the constructor.
The message 3 arrives eventually, but the assertion fails - it is not the first message to arrive.
I would like to avoid asserting more messages than I need for a test - those can change, and they are not in the scope of this particular test anyway.
As TestProbe does not contain such a method, I suspect there may be something wrong with my test setup (or rather with my project architecture). I see there are many methods like fishForMessage, but all of those require an explicit time parameter, which seems irrelevant since my whole test is purely synchronous.
Is there any way to write such a test, where the desired message just has to be among all those received? If not, how can my setup be improved to be easily testable?
fishForMessage actually fits. All of these assertions, including expectMsg, are asynchronous; expectMsg just uses the preconfigured timeFactor as the timeout.
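For example, something along these lines should ignore the registration message and succeed once the 3 arrives (a sketch based on the test above; the timeout value is arbitrary):

import scala.concurrent.duration._

b.fishForMessage(3.seconds, hint = "waiting for 3") {
  case 3 => true   // the message under test: stop fishing, assertion passes
  case _ => false  // swallow registration messages and anything else
}

fishForMessage returns the message that matched; the earlier messages are simply consumed and dropped.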
TestActorRef only guarantees that the CallingThreadDispatcher will be used to send messages and execute futures (if they use the dispatcher from the test actor), so things act sequentially only as long as they use context.dispatcher. Nothing stops some code inside your MyActor from using another dispatcher to send a response, so all checks should still be asynchronous - you just can't get rid of that.
I have a design challenge in regards to a new akka application that I'm building.
The issue/challenge is:
On the client side I have made a simple actor which sends a request and then uses become() to wait for a proper server answer, of course also including a timeout message in case I don't get an answer in time.
The interesting thing is however on the server side.
Here I have the following construction:
Actor A (configured as a round-robin router) receives all requests from the client.
Actor A then forwards each message to one of the actors A1, A2, ..., Ax, which have all been created in the context of actor A, meaning it is their supervisor.
In the normal case actor Ax would be able to just reply to the sender, since the message was forwarded. However...
In case of an error I would like to, besides logging it in the server log, also give the user some kind of information about the error that has happened.
In a perfect world I would prefer to be able to say something like getErrorActorsLastSender() in the supervisor strategy and then get the ActorRef of the client which caused the issue. The reason why I prefer this is that I would then have one place for all the error handling, and all unforeseen exceptions would always be handled in at least some generic way.
The alternative is to override the preRestart() method on each child actor, and then make the supervisor strategy restart the actor when an exception is thrown. However, this would require me to implement this method for x child actors.
Any good suggestions, if this is possible from supervisor strategy?
Thanks in advance.
Have you tried creating your own supervisor strategy, for example by extending OneForOneStrategy? There is a method called handleFailure which takes (among other things) the child and the cause of the failure. You also get the ActorContext, which gives you the sender of the message that caused the error, so I think you should be able to do what you want when you override this method.
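A rough sketch of the idea; the exact handleFailure signature may differ slightly between Akka versions (check your version's SupervisorStrategy), and the reply message is just an illustration:

import akka.actor.{ActorContext, ActorRef, ChildRestartStats, OneForOneStrategy, SupervisorStrategy}

class NotifyingStrategy extends OneForOneStrategy()(SupervisorStrategy.defaultDecider) {
  override def handleFailure(context: ActorContext, child: ActorRef, cause: Throwable,
                             stats: ChildRestartStats, children: Iterable[ChildRestartStats]): Boolean = {
    // The sender of the message that caused the failure; because the router
    // used forward, this should be the original client.
    context.sender() ! s"Request failed: ${cause.getMessage}"
    super.handleFailure(context, child, cause, stats, children)
  }
}

The parent (router) actor would then declare override val supervisorStrategy = new NotifyingStrategy, so the notification logic lives in one place.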
One way to achieve your goal is to encapsulate the sender in the exception. If you throw the exception yourself, this should be straightforward.
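For example, a hypothetical exception type carrying the client's ActorRef (the names are made up, not from the answer):

import akka.actor.ActorRef

// Exception that carries the ref of the client whose request triggered the failure
class ProcessingFailedException(message: String, val client: ActorRef)
  extends RuntimeException(message)

// Inside a child actor Ax, where sender() is the original client because the
// router forwarded the request:
//   throw new ProcessingFailedException("could not process request", sender())

The supervisor's decider (or an overridden preRestart) can then pattern-match on this exception type and notify the embedded client before the child is restarted.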