Akka DeadLetter monitor not receiving messages sent by unhandled() - akka

I have the following actor setup:
public class Master extends AbstractActor {
    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Init.class, init -> {
                log.info("Master received an Init, creating DLW and subscribing it.");
                ActorRef deadLetterWatcher = context().actorOf(Props.create(DeadLetterWatcher.class),
                        "DLW");
                context().system().eventStream().subscribe(deadLetterWatcher, DeadLetterWatcher.class);
                log.info("Master finished initializing.");
            })
            .matchAny(message -> {
                log.info("Found a {} that Master can't handle.",
                        message.getClass().getName());
                unhandled(message);
            }).build();
    }
}
public class DeadLetterWatcher extends AbstractActor {
    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .matchAny(message -> {
                log.info("Got a dead letter!");
            }).build();
    }
}
At startup the Master actor is created and is sent an Init message, and sure enough, I do see the following log output:
Master received an Init, creating DLW and subscribing it.
Master finished initializing.
However, shortly after this, Master is sent a Fizzbuzz message, and I see this in the logs:
Found a com.me.myapp.Fizzbuzz that Master can't handle.
But then I don't see the DeadLetterWatcher log "Got a dead letter!", which tells me I have something wired incorrectly. Any ideas where I'm going awry?

Pass in akka.actor.UnhandledMessage.class, instead of DeadLetterWatcher.class, to the subscribe() method:
context().system().eventStream().subscribe(deadLetterWatcher, akka.actor.UnhandledMessage.class);
Note that unhandled messages are not the same thing as dead letters. For the former, an actor "must provide a pattern match for all messages that it can accept, and if you want to be able to handle unknown messages, then you need to have a default case." Your Master actor handles only Init messages; every other message it receives is considered "unhandled" and triggers the publication of an akka.actor.UnhandledMessage to the EventStream. You're explicitly calling the unhandled method for non-Init messages, but unhandled would be called by default if you didn't have the fallback case clause. Also note that you can log unhandled messages via the configuration, without the need for a "monitor" actor:
akka {
    actor {
        debug {
            # enable DEBUG logging of unhandled messages
            unhandled = on
        }
    }
}
Dead letters, on the other hand, are messages that cannot be delivered, such as messages that are sent to a stopped actor, and they also trigger the publication of messages to the EventStream.
Since unhandled messages are different from dead letters, your DeadLetterWatcher is misnamed and should probably be named something like UnhandledMessageWatcher. That being said, if your goal is only to log unhandled messages, then the simplest approach is to do so with the logging configuration mentioned above.
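For completeness, here is a minimal sketch of such a watcher (the class name and log messages are illustrative, not from your code). It subscribes itself in preStart and unwraps the akka.actor.UnhandledMessage envelope; the same pattern works for akka.actor.DeadLetter if you also want true dead letters:
import akka.actor.AbstractActor;
import akka.actor.DeadLetter;
import akka.actor.UnhandledMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UnhandledMessageWatcher extends AbstractActor {
    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public void preStart() {
        // Subscribe to both channels; keep only the one you actually need.
        context().system().eventStream().subscribe(getSelf(), UnhandledMessage.class);
        context().system().eventStream().subscribe(getSelf(), DeadLetter.class);
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(UnhandledMessage.class, um ->
                log.info("Unhandled {} sent to {}", um.message().getClass().getName(), um.recipient()))
            .match(DeadLetter.class, dl ->
                log.info("Dead letter {} addressed to {}", dl.message().getClass().getName(), dl.recipient()))
            .build();
    }
}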

Related

How do I acknowledge / requeue with cloud stream sqs binders

I am writing an application to consume messages from a queue. I am able to successfully bind the SQS queue and receive the messages. However, when I want to requeue the message, I use the following:
message.getHeaders().get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
.acknowledge();
To requeue, I also use:
StaticMessageHeaderAccessor.getAcknowledgmentCallback(message).acknowledge(AcknowledgmentCallback.Status.REQUEUE);
But it is not successful.
I also tried PollableMessage, but I am unclear on how to implement it.
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_overview_2
I have a Consumer like this:
public class DefaultChannel implements Channel, Consumer<Message<String>> {
    @Override
    public void accept(Message<String> message) {
        if ("success".equals(message.getPayload())) {
            message.getHeaders().get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
                    .acknowledge();
        } else {
            StaticMessageHeaderAccessor.getAcknowledgmentCallback(message).acknowledge(AcknowledgmentCallback.Status.REQUEUE);
        }
    }
}
I was able to requeue successfully by setting the messageDeletionPolicy: ON_SUCCESS property and throwing an exception from the code.
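A rough sketch of that approach, assuming the binder honors the messageDeletionPolicy: ON_SUCCESS setting mentioned above (the exact property path depends on the binder version and is not shown here; the class name is made up). The consumer simply returns normally to acknowledge and throws to trigger redelivery:
import java.util.function.Consumer;

import org.springframework.messaging.Message;

public class RequeueOnFailureConsumer implements Consumer<Message<String>> {
    @Override
    public void accept(Message<String> message) {
        if ("success".equals(message.getPayload())) {
            // Returning normally lets the binder delete the message (ON_SUCCESS).
            return;
        }
        // Throwing prevents deletion; SQS makes the message visible again
        // after the visibility timeout, which effectively requeues it.
        throw new IllegalStateException("processing failed, leave message for redelivery");
    }
}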

How to avoid receiving messages multiple times from a ServiceBus Queue when using the WebJobs SDK

I have got a WebJob with the following ServiceBus handler using the WebJobs SDK:
[Singleton("{MessageId}")]
public static async Task HandleMessagesAsync([ServiceBusTrigger("%QueueName%")] BrokeredMessage message, [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue, TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));
        // To avoid coupling to Microsoft.Azure.WebJobs, the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);
        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }
    }
}
If the prerequisites for the handler aren't fulfilled, outputMessages contains a BrokeredMessage with the same MessageId, Label and payload as the one we are currently handling, but with a ScheduledEnqueueTimeUtc in the future.
The idea is that we complete the handling of the current message quickly and wait for a retry by scheduling the new message in the future.
Sometimes, especially when there are more messages in the Queue than the SDK peek-locks, I see messages duplicating in the ServiceBus queue. They have the same MessageId, Label and payload, but a different SequenceNumber, EnqueuedTimeUtc and ScheduledEnqueueTimeUtc. They all have a delivery count of 1.
Looking at my handler code, the only way this can happen is if I received the same message multiple times, figure out that I need to wait and create a new message for handling in the future. The handler finishes successfully, so the original message gets completed.
The initial messages are unique. Also I put the SingletonAttribute on the message handler, so that messages for the same MessageId cannot be consumed by different handlers.
Why are multiple handlers triggered with the same message and how can I prevent that from happening?
I am using Microsoft.Azure.WebJobs v2.1.0.
My handlers take at most 17 seconds and on average 1 second. The lock duration is 1 minute. Still, my best theory is that something with the message (re)locking doesn't work: while I'm processing the handler, the lock gets lost, the message goes back to the queue and gets consumed another time. If both handlers then see that the critical resource is still occupied, they both enqueue a new message.
After a little bit of experimenting I figured out the root cause and I found a workaround.
If a ServiceBus message is completed, but the peek lock is not abandoned, it will return to the queue in active state after the lock expires.
The ServiceBus QueueClient, apparently, abandons the lock, once it receives the next message (or batch of messages).
So if the QueueClient used by the WebJobs SDK terminates unexpectedly (e.g. because of the process being ended or the Web App being restarted), all messages that have been locked appear back in the Queue, even if they have been completed.
In my handler I am now completing the message manually and also abandoning the lock like this:
public static async Task ProcessQueueMessageAsync([ServiceBusTrigger("%QueueName%")] BrokeredMessage message, [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue, TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));
        // To avoid coupling to Microsoft.Azure.WebJobs, the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);
        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }
        await message.CompleteAsync().ConfigureAwait(false);
        await message.AbandonAsync().ConfigureAwait(false);
    }
}
That way I don't get the messages back into the Queue in the reboot scenario.

Akka: Subscriber handling multiple topics

public class Subscriber extends UntypedActor
{
    public Subscriber() {
        ActorRef mediator =
            DistributedPubSub.get(getContext().system()).mediator();
        // subscribe to the topic named "content"
        mediator.tell(new DistributedPubSubMediator.Subscribe("content", getSelf()),
            getSelf());
        mediator.tell(new DistributedPubSubMediator.Subscribe("content_2", getSelf()),
            getSelf());
    }

    public void onReceive(Object msg) {
        if (msg instanceof String)
            System.out.println("Message received: " + msg);
        else if (msg instanceof DistributedPubSubMediator.SubscribeAck)
            System.out.println("subscribing");
        else
            unhandled(msg);
    }
}
Now suppose both topics carry a message with the same structure name (e.g. foo) but with different types. In this case, how will the subscriber know which topic a "foo" message was received from?
The DistributedPubSub (DPS) mediator is just a means of getting messages to the actor. The receive loop doesn't care whether a message arrived via tell, ask, or the DPS; it just knows there is a message in its inbox. What's more, the DPS mediator is essentially a router: it calls forward() on the messages it receives, so it doesn't rewrite the sender information of whoever published the message. So the answer to your question is that you won't know which topic a message came in on, and if that matters, there is probably something wrong with the design. I can't think of a valid reason why the topic would be significant, as opposed to the sender of the original message or the actual type of the message itself. I would do my switching and checking on the types, and if I needed to know where a message came from, I'd look at the path of the sender's ActorRef.
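For illustration, a minimal sketch of that type-based approach using the AbstractActor API (the payload classes and subscriber name are made up, not from the question): publish a distinct class per topic and match on it, falling back to getSender().path() when provenance matters.
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;

public class TypedSubscriber extends AbstractActor {
    // Hypothetical per-topic payload types
    public static final class ContentFoo { }
    public static final class Content2Foo { }

    public TypedSubscriber() {
        ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
        mediator.tell(new DistributedPubSubMediator.Subscribe("content", getSelf()), getSelf());
        mediator.tell(new DistributedPubSubMediator.Subscribe("content_2", getSelf()), getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(ContentFoo.class, msg ->
                System.out.println("foo from topic 'content', published by " + getSender().path()))
            .match(Content2Foo.class, msg ->
                System.out.println("foo from topic 'content_2', published by " + getSender().path()))
            .match(DistributedPubSubMediator.SubscribeAck.class, ack ->
                System.out.println("subscribed to " + ack.subscribe().topic()))
            .build();
    }
}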

How to make sure last AMQP message is published successfully before closing connection?

I have multiple processes working together as a system. One of the processes acts as main process. When the system is shutting down, every process need to send a notification (via RabbitMQ) to the main process and then exit. The program is written in C++ and I am using AMQPCPP library.
The problem is that sometimes the notification is not published successfully. I suspect exiting too soon is the cause of the problem as AMQPCPP library has no chance to send the message out before closing its connection.
The documentation of AMQPCPP says:
Published messages are normally not confirmed by the server, and the RabbitMQ will not send a report back to inform you whether the message was successfully published or not. Therefore the publish method does not return a Deferred object.
As long as no error is reported via the Channel::onError() method, you can safely assume that your messages were delivered.
This can of course be a problem when you are publishing many messages. If you get an error halfway through there is no way to know for sure how many messages made it to the broker and how many should be republished. If this is important, you can wrap the publish commands inside a transaction. In this case, if an error occurs, the transaction is automatically rolled back by RabbitMQ and none of the messages are actually published.
Without a confirmation from the RabbitMQ server, it's hard to decide when it is safe to exit the process. Furthermore, using a transaction sounds like overkill for a notification.
Could anyone suggest a simple solution for a graceful shutting down without losing the last notification?
It turns out that I can set up a callback when closing a channel, so that I can safely close the connection once all channels are closed successfully. I am not entirely sure whether this guarantees that all outgoing messages are really published, but from the test results it seems the problem is solved.
class MyClass
{
    ...
    AMQP::TcpConnection m_tcpConnection;
    AMQP::TcpChannel m_channelA;
    AMQP::TcpChannel m_channelB;
    ...
};

void MyClass::stop(void)
{
    sendTerminateNotification();

    int remainChannel = 2;
    auto closeConnection = [&]() {
        --remainChannel;
        if (remainChannel == 0) {
            // close connection when all channels are closed.
            m_tcpConnection.close();
            ev::get_default_loop().break_loop();
        }
    };

    auto closeChannel = [&](AMQP::TcpChannel & channel) {
        channel.close()
            .onSuccess([&](void) { closeConnection(); })
            .onError([&](const char * msg)
            {
                std::cout << "cannot close channel: "
                          << msg << std::endl;
                // close the connection anyway
                closeConnection();
            });
    };

    closeChannel(m_channelA);
    closeChannel(m_channelB);
}

How to detect dead remote client or server in akka2

I'm new to Akka 2. The following is my question:
There is a server actor and several client actors.
The server stores the refs of all the client actors.
I wonder how the server can detect which client is disconnected (shutdown, crash, ...).
And is there a way to tell the clients that the server is dead?
There are two ways to interact with an actor's lifecycle. First, the parent of an actor defines a supervisory policy that handles actor failures and has the option to restart, stop, resume, or escalate after a failure. In addition, a non-supervisor actor can "watch" an actor to detect the Terminated message generated when the actor dies. This section of the docs covers the topic: http://doc.akka.io/docs/akka/2.0.1/general/supervision.html
Here's an example of using watch from a spec. I start an actor, then set up a watcher for the Termination. When the actor gets a PoisonPill message, the event is detected by the watcher:
"be able to watch the proxy actor fail" in {
val myProxy = system.actorOf(Props(new VcdRouterActor(vcdPrivateApiUrl, vcdUser, vcdPass, true, sessionTimeout)), "vcd-router-" + newUuid)
watch(myProxy)
myProxy ! PoisonPill
expectMsg(Terminated(`myProxy`))
}
Here's an example of a custom supervisor strategy that Stops the child actor if it failed due to an authentication exception since that probably will not be correctable, or escalates the failure to a higher supervisor if the failure was for another reason:
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 5, withinTimeRange = 1 minutes) {
    // an authentication failure probably can't be corrected, so stop the proxy for this host.
    case e: AuthenticationException ⇒
        log.error(e.message + " Stopping proxy router for this host")
        Stop
    // don't know what it was, escalate it.
    case e: Exception ⇒
        log.warning("Unknown exception from vCD proxy. Escalating a {}", e.getClass.getName)
        Escalate
}
Within an actor, you can generate the failure by throwing an exception or handling a PoisonPill message.
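As a small Java sketch of that idea (the class and message names are invented for illustration): an exception thrown from the receive block is routed to the parent's supervisor strategy, while a PoisonPill merely stops the actor and produces Terminated for its watchers.
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.PoisonPill;

public class FlakyWorker extends AbstractActor {
    // Hypothetical command that makes the worker fail
    public static final class Explode { }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Explode.class, msg -> {
                // Akka catches this and applies the parent's supervisor strategy
                // (restart, stop, resume, or escalate).
                throw new IllegalStateException("boom");
            })
            .build();
    }
}

// Stopping without a failure, which still triggers Terminated for watchers:
// worker.tell(PoisonPill.getInstance(), ActorRef.noSender());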
Another pattern that may be useful if you don't want to generate a failure is to respond with a failure to the sender. Then you can have a more personal message exchange with the caller. For example, the caller can use the ask pattern and use an onComplete block for handling the response. Caller side:
vcdRouter ? DisableOrg(id) mapTo manifest[VcdHttpResponse] onComplete {
    case Left(failure) => log.info("received a failure message")
    case Right(success) => log.info("org disabled")
}
Callee side:
val org0 = new UUID("00000000-0000-0000-0000-000000000000")

def receive = {
    case DisableOrg(id: UUID) if id == org0 => sender ! Failure(new IllegalArgumentException("can't disable org 0"))
    case DisableOrg(id: UUID) => sender ! disableOrg(id)
}
In order to make your server react to changes of remote client status you could use something like the following (example is for Akka 2.1.4).
In Java
@Override
public void preStart() {
    context().system().eventStream().subscribe(getSelf(), RemoteLifeCycleEvent.class);
}
Or in Scala
override def preStart = {
    context.system.eventStream.subscribe(listener, classOf[RemoteLifeCycleEvent])
}
If you're only interested in knowing when a client disconnects, you could register only for RemoteClientDisconnected.
More info here (Java) and here (Scala).
In the upcoming Akka 2.2 release (RC1 was released yesterday), Death Watch works both locally and remotely. If you watch the root guardian on the other system, then when you get Terminated for it, you know that the remote system is down.
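A hedged Java sketch of that watch-based approach (the registration protocol and class name are assumptions, not from this answer): the server watches each client ref it learns about and reacts to Terminated, which with remote death watch also fires when the client's whole system goes down.
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Terminated;

public class ClientRegistry extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            // Assumed protocol: a client announces itself by sending its own ActorRef
            .match(ActorRef.class, client -> {
                getContext().watch(client);   // death watch works locally and across remoting
            })
            .match(Terminated.class, t -> {
                System.out.println("Lost client " + t.getActor().path()
                        + " - it stopped or its system is unreachable");
            })
            .build();
    }
}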
Hope that helps!