How to detect dead remote client or server in Akka 2

I'm new to Akka 2. The following is my question:
There is a server actor and several client actors.
The server stores the refs of all the client actors.
I wonder how the server can detect which client has disconnected (shutdown, crash, ...).
And is there a way to tell the clients that the server is dead?

There are two ways to interact with an actor's lifecycle. First, the parent of an actor defines a supervisory policy that handles actor failures and has the option to restart, stop, resume, or escalate after a failure. In addition, a non-supervisor actor can "watch" an actor to detect the Terminated message generated when the actor dies. This section of the docs covers the topic: http://doc.akka.io/docs/akka/2.0.1/general/supervision.html
Here's an example of using watch from a spec. I start an actor, then set up a watcher for its termination. When the actor gets a PoisonPill message, the resulting Terminated event is detected by the watcher:
"be able to watch the proxy actor fail" in {
val myProxy = system.actorOf(Props(new VcdRouterActor(vcdPrivateApiUrl, vcdUser, vcdPass, true, sessionTimeout)), "vcd-router-" + newUuid)
watch(myProxy)
myProxy ! PoisonPill
expectMsg(Terminated(`myProxy`))
}
Here's an example of a custom supervisor strategy that stops the child actor if it failed due to an authentication exception (since that probably will not be correctable), or escalates the failure to a higher-level supervisor if it failed for any other reason:
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 1.minute) {
  // an authentication failure is unlikely to fix itself, so stop the child
  case e: AuthenticationException ⇒
    log.error(e.message + " Stopping proxy router for this host")
    Stop
  // don't know what it was, escalate it
  case e: Exception ⇒
    log.warning("Unknown exception from vCD proxy. Escalating a {}", e.getClass.getName)
    Escalate
}
Within an actor, you can trigger the supervisor's failure handling by throwing an exception; stopping the actor (for example by sending it a PoisonPill) produces the Terminated message that watchers see.
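For example, a child actor might surface an unrecoverable login failure by throwing the exception that the strategy above matches. A minimal sketch, assuming a hypothetical Connect message, Credentials type, and login function, and assuming AuthenticationException takes a message string:
class ProxyChild(login: Credentials => Boolean) extends Actor {
  def receive = {
    case Connect(credentials) =>
      // throwing here hands the failure to the parent's supervisorStrategy,
      // which maps AuthenticationException to Stop
      if (!login(credentials))
        throw new AuthenticationException("login to the vCD host failed")
  }
}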
Another pattern that may be useful if you don't want to generate a failure is to respond to the sender with a Failure message. Then you can have a more direct exchange with the caller. For example, the caller can use the ask pattern and an onComplete block to handle the response. Caller side:
(vcdRouter ? DisableOrg(id)).mapTo[VcdHttpResponse] onComplete {
  case Left(failure) ⇒ log.info("received a failure message")
  case Right(success) ⇒ log.info("org disabled")
}
Callee side:
val org0 = UUID.fromString("00000000-0000-0000-0000-000000000000")

def receive = {
  case DisableOrg(id: UUID) if id == org0 ⇒
    sender ! Failure(new IllegalArgumentException("can't disable org 0"))
  case DisableOrg(id: UUID) ⇒
    sender ! disableOrg(id)
}

In order to make your server react to changes in remote client status, you could use something like the following (the example is for Akka 2.1.4).
In Java
@Override
public void preStart() {
  context().system().eventStream().subscribe(getSelf(), RemoteLifeCycleEvent.class);
}
Or in Scala
override def preStart(): Unit = {
  context.system.eventStream.subscribe(self, classOf[RemoteLifeCycleEvent])
}
If you're only interested in when a client disconnects, you can subscribe only to RemoteClientDisconnected.
More info here (Java) and here (Scala).
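A minimal sketch of that narrower subscription, written against the Akka 2.1.x remote lifecycle events (the exact constructor fields of RemoteClientDisconnected may differ slightly across versions):
import akka.actor.{ Actor, ActorLogging }
import akka.remote.RemoteClientDisconnected

class ClientWatcher extends Actor with ActorLogging {
  override def preStart(): Unit =
    // subscribe only to disconnect events instead of the whole RemoteLifeCycleEvent hierarchy
    context.system.eventStream.subscribe(self, classOf[RemoteClientDisconnected])

  def receive = {
    case RemoteClientDisconnected(_, remoteAddress) =>
      log.info("Remote client at {} disconnected", remoteAddress)
  }
}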

In the upcoming Akka 2.2 release (RC1 was released yesterday), Death Watch works both locally and remotely. If you watch the root guardian on the other system, then when you get Terminated for it, you know that the remote system is down.
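A rough sketch of remote death watch on Akka 2.2+ (the remote path and names are illustrative; Identify/ActorIdentity are used to resolve a remote ActorRef before watching it):
import akka.actor.{ Actor, ActorIdentity, ActorLogging, Identify, Terminated }

class RemoteSystemWatcher(remotePath: String) extends Actor with ActorLogging {
  override def preStart(): Unit =
    // ask the remote actor to identify itself so we obtain an ActorRef to watch
    context.actorSelection(remotePath) ! Identify("remote")

  def receive = {
    case ActorIdentity("remote", Some(ref)) => context.watch(ref)
    case ActorIdentity("remote", None)      => log.warning("Remote actor at {} not reachable", remotePath)
    case Terminated(ref)                    => log.warning("Remote system at {} appears to be down", ref.path.address)
  }
}

// usage, e.g. pointing at the remote user guardian:
// system.actorOf(Props(classOf[RemoteSystemWatcher], "akka.tcp://other-system@host:2552/user"))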
Hope that helps!

Related

Processing Dropped Message In Akka Streams

I have the following source queue definition.
lazy val (processMessageSource, processMessageQueueFuture) =
  peekMatValue(
    Source
      .queue[(ProcessMessageInputData, Promise[ProcessMessageOutputData])](5, OverflowStrategy.dropNew))

def peekMatValue[T, M](src: Source[T, M]): (Source[T, M], Future[M]) = {
  val p = Promise[M]()
  val s = src.mapMaterializedValue { m =>
    p.trySuccess(m)
    m
  }
  (s, p.future)
}
The ProcessMessageInputData class is essentially an artifact that is created when a caller calls a web server endpoint, which is hooked up to this stream (i.e. the service endpoint's business logic puts messages into this queue). The Promise[ProcessMessageOutputData] is completed downstream in the sink of the application, and the web server has an onComplete callback on its future to return the response.
There are also other sources of ingress into this stream.
Now the buffer may be backed up since the other source may overload the system, thereby triggering stream backpressure. The existing code just drops the new message, but I still want to complete the ProcessMessageOutputData promise with an exception stating something like "Throttled".
Is there a mechanism to write a custom overflow strategy, or a post processing on the overflowed element that allows me to do this?
According to https://github.com/akka/akka/blob/master/akkastream/src/main/scala/akka/stream/impl/QueueSource.scala#L83
dropNew would work just fine. On the client's end it would look like this:
processMessageQueue.offer((in, pr)).foreach { res =>
  res match {
    case QueueOfferResult.Enqueued => // Code to handle the case when the element was successfully enqueued.
    case QueueOfferResult.Dropped  => // Code to handle messages that are dropped because the buffer was overflowing.
  }
}
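The Dropped branch is where you can fail the caller's promise so the endpoint can answer with a throttling error. A sketch using the names from the question (pr is the Promise[ProcessMessageOutputData] offered together with the input, processMessageQueue is the materialized queue once processMessageQueueFuture completes, and an implicit ExecutionContext is assumed to be in scope):
import akka.stream.QueueOfferResult

processMessageQueue.offer((in, pr)).foreach {
  case QueueOfferResult.Enqueued    => // accepted; the promise is completed downstream in the sink
  case QueueOfferResult.Dropped     => pr.failure(new RuntimeException("Throttled"))
  case QueueOfferResult.Failure(ex) => pr.failure(ex)
  case QueueOfferResult.QueueClosed => pr.failure(new RuntimeException("Stream already completed"))
}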

Akka DeadLetter monitor not receiving messages sent by unhandled()

I have the following actor setup:
public class Master extends AbstractActor {
    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Init.class, init -> {
                log.info("Master received an Init, creating DLW and subscribing it.");
                ActorRef deadLetterWatcher = context().actorOf(Props.create(DeadLetterWatcher.class),
                        "DLW");
                context().system().eventStream().subscribe(deadLetterWatcher, DeadLetterWatcher.class);
                log.info("Master finished initializing.");
            })
            .matchAny(message -> {
                log.info("Found a {} that Master can't handle.",
                        message.getClass().getName());
                unhandled(message);
            }).build();
    }
}
public class DeadLetterWatcher extends AbstractActor {
    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .matchAny(message -> {
                log.info("Got a dead letter!");
            }).build();
    }
}
At startup the Master actor is created and is sent an Init message, and sure enough, I do see the following log output:
Master received an Init, creating DLW and subscribing it.
Master finished initializing.
However, shortly after this, Master is sent a Fizzbuzz message, and I see this in the logs:
Found a com.me.myapp.Fizzbuzz that Master can't handle.
But then I don't see the DeadLetterWatcher log "Got a dead letter!", which tells me I have something wired incorrectly. Any ideas where I'm going awry?
Pass in akka.actor.UnhandledMessage.class, instead of DeadLetterWatcher.class, to the subscribe() method:
context().system().eventStream().subscribe(deadLetterWatcher, akka.actor.UnhandledMessage.class);
Note that unhandled messages are not the same thing as dead letters. For the former, an actor "must provide a pattern match for all messages that it can accept, and if you want to be able to handle unknown messages, then you need to have a default case." Your Master actor handles only Init messages; all other messages that it receives are considered "unhandled" and trigger the publication of an akka.actor.UnhandledMessage to the EventStream. You're explicitly calling the unhandled method for non-Init messages, but unhandled would be called by default if you didn't have the fallback case clause. Also note that you can log unhandled messages via the configuration, without the need of a "monitor" actor:
akka {
  actor {
    debug {
      # enable DEBUG logging of unhandled messages
      unhandled = on
    }
  }
}
Dead letters, on the other hand, are messages that cannot be delivered, such as messages that are sent to a stopped actor, and they also trigger the publication of messages to the EventStream.
Since unhandled messages are different from dead letters, your DeadLetterWatcher is misnamed and should probably be named something like UnhandledMessageWatcher. That being said, if your goal is only to log unhandled messages, then the simplest approach is to do so with the logging configuration mentioned above.
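If you do want a programmatic listener anyway, here is a minimal sketch (in Scala for brevity; the actor name is illustrative) that subscribes one actor to both event types, which makes the difference between them easy to see:
import akka.actor.{ Actor, ActorLogging, DeadLetter, UnhandledMessage }

class MessageWatcher extends Actor with ActorLogging {
  override def preStart(): Unit = {
    context.system.eventStream.subscribe(self, classOf[UnhandledMessage])
    context.system.eventStream.subscribe(self, classOf[DeadLetter])
  }

  def receive = {
    case UnhandledMessage(msg, _, recipient) =>
      log.info("{} did not handle {}", recipient, msg)
    case DeadLetter(msg, _, recipient) =>
      log.info("Dead letter {} could not be delivered to {}", msg, recipient)
  }
}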

Akka TestProbe to test context.watch() / Terminated handling

I'm testing an Akka system using TestKit. One actor of the system I'm testing, upon receiving a certain message type, context.watches the sender, and kills itself when the sender dies:
trait Handler extends Actor {
  override def receive: Receive = {
    case Init          => context.watch(sender)
    case _: Terminated => context.stop(self)
  }
}
In my test I'm sending
val probe = TestProbe()(system)
val target = TestActorRef(Props(classOf[Handler]))
probe.send(target, Init)
Now, to test the watch / Terminated behavior - I want to simulate the testprobe being killed.
I can do
probe.send(target, Terminated)
But this presupposes that target has called context.watch(sender); otherwise it would not receive a Terminated.
I can do
probe.testActor ! Kill
which doesn't result in a Terminated unless target has correctly called context.watch(sender), but I don't actually want the test probe killed, as it needs to remain responsive to test whether (for example) target continues to send messages instead of stopping itself.
I've come across this a few times now; what's the correct way to test whether an actor handles the above situation correctly?
You could watch the actor under test for termination with a separate probe instead of trying to do that via the 'sender' probe:
val probe = TestProbe()(system)
val deathWatcher = TestProbe()(system)
val target = TestActorRef(Props(classOf[Handler]))
deathWatcher.watch(target)
probe.send(target, Init)
// TODO make sure the message is processed.. perhaps ack it?
probe.ref ! Kill
deathWatcher.expectTerminated(target)

Why can't message be delivered to Akka actor?

I have one actor sending a message to another actor. It successfully does so multiple times, but after a few messages, the second actor stops processing the messages. The system itself isn't very loaded.
The test that reproduces the problem is:
test("case2: Primary (in isolation) should react properly to Insert, Remove, Get") {
val arbiter = TestProbe()
val primary = system.actorOf(Replica.props(arbiter.ref, Persistence.props(flaky = false)), "case2-primary")
val client = session(primary)
arbiter.expectMsg(Join)
arbiter.send(primary, JoinedPrimary)
client.getAndVerify("k1")
client.setAcked("k1", "v1")
client.getAndVerify("k1")
client.getAndVerify("k2")
client.setAcked("k2", "v2") // assertion failure happens here
client.getAndVerify("k2")
client.removeAcked("k1")
client.getAndVerify("k1")
}
Since this is part of a Coursera course, I'd rather not post my implementation.
What kinds of things might cause this failure?

Replying to remote client via reply only?

According to the Akka actor documentation, one can reply using self.channel ! Message, and this works locally. I would like to do the same with remote actors.
I have:
class ServerActor extends Actor {
  def receive = {
    case "Hello" =>
      self.channel ! "World"
  }
}
and
class ClientActor extends Actor {
  val remote = ...
  def receive = {
    case "Start" =>
      remote ! "Hello"
    case "World" =>
      println("World received")
  }
}
This works in so far as the ServerActor receives the "Hello" and sends a "World" message to a ClientActor. Unfortunately, it seems that the ClientActor receiving the message is one that is created in the servers VM, not the one that actually sent it (in the client VM).
Is there a way to make this work?
PS: It works when I do self reply "World" and remote ? "Hello"; however, I would rather send a message than rely on ask/reply.
EDIT: Thanks to everyone. Starting remoting on both ends was the problem. A warning for others finding this question:
When letting clients receive their responses in a non-blocking manner (i.e. not using remote ? request), shutting them down immediately on receiving a shutdown message will cause some strange behavior (mentioned in my comments below), possibly by design due to Akka's let-it-fail fault tolerance. Since the clients are not waiting for a response, shutting them down immediately results in the following (on Akka 1.2): because the "original clients" no longer exist (but the round trip is still in progress), they are restarted, strangely, both on the client and the server.
I think it is the same problem I had. You need to start up a server instance on the client as well if you want to receive messages from the server.
The exception is when you are explicitly asking for a result with the question-mark (ask) operator.
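Roughly, with the Akka 1.x remoting API (host names, ports, and the service id are illustrative; check the docs for your exact version):
import akka.actor.Actor

// server side: start remoting and register the server actor under a service id
Actor.remote.start("server-host", 2552)
Actor.remote.register("server-service", Actor.actorOf[ServerActor])

// client side: also start remoting, otherwise the server's "World" reply has no
// remoting endpoint in the client VM to come back to
Actor.remote.start("client-host", 2553)
// inside ClientActor, the remote lookup would then be something like:
//   val remote = Actor.remote.actorFor("server-service", "server-host", 2552)
Actor.actorOf[ClientActor].start() ! "Start"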