How do I acknowledge / requeue with cloud stream SQS binders?

I am writing an application to consume messages from a queue. I am able to bind the SQS queue and receive messages successfully. To acknowledge a message, I use:
message.getHeaders().get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
        .acknowledge();
To requeue, I tried:
StaticMessageHeaderAccessor.getAcknowledgmentCallback(message).acknowledge(AcknowledgmentCallback.Status.REQUEUE);
But this is not successful; the message is not requeued.
I also tried PollableMessage but am unclear how to implement it:
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_overview_2
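For what it's worth, the polled-consumer variant from the linked docs can be wired roughly as in the minimal sketch below. This assumes Spring Cloud Stream's PollableMessageSource and that the binder populates the acknowledgment callback header; whether REQUEUE is actually honoured is binder-specific.

import org.springframework.boot.ApplicationRunner;
import org.springframework.cloud.stream.binder.PollableMessageSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.StaticMessageHeaderAccessor;
import org.springframework.integration.acks.AcknowledgmentCallback;

@Configuration
public class PollerConfig {

    // A minimal polling loop; a real app would run this on its own thread
    // and back off when poll() returns false (no message available).
    @Bean
    public ApplicationRunner poller(PollableMessageSource source) {
        return args -> {
            while (true) {
                source.poll(message -> {
                    AcknowledgmentCallback callback =
                            StaticMessageHeaderAccessor.getAcknowledgmentCallback(message);
                    try {
                        // ... process the message ...
                        callback.acknowledge(AcknowledgmentCallback.Status.ACCEPT);
                    }
                    catch (RuntimeException e) {
                        // Whether REQUEUE is honoured depends on the binder.
                        callback.acknowledge(AcknowledgmentCallback.Status.REQUEUE);
                        throw e;
                    }
                });
            }
        };
    }
}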
I have a Consumer like this:
public class DefaultChannel implements Channel, Consumer<Message<String>> {

    @Override
    public void accept(Message<String> message) {
        if ("success".equals(message.getPayload())) {
            message.getHeaders()
                   .get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
                   .acknowledge();
        } else {
            StaticMessageHeaderAccessor.getAcknowledgmentCallback(message)
                   .acknowledge(AcknowledgmentCallback.Status.REQUEUE);
        }
    }
}

Update: I was able to requeue successfully by setting the messageDeletionPolicy: ON_SUCCESS property and throwing an exception from the code.
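Based on that update, a minimal sketch of the working approach (assuming a binder that exposes a messageDeletionPolicy consumer property; the exact property key is binder-specific): with ON_SUCCESS the binder deletes the message only when the handler returns normally, so throwing leaves it on the queue to become visible again after the visibility timeout.

import java.util.function.Consumer;
import org.springframework.messaging.Message;

public class RequeueingConsumer implements Consumer<Message<String>> {

    @Override
    public void accept(Message<String> message) {
        if ("success".equals(message.getPayload())) {
            // Normal return: with messageDeletionPolicy ON_SUCCESS the
            // binder deletes the message from the queue.
            return;
        }
        // Throwing prevents deletion, so SQS redelivers the message
        // once its visibility timeout expires.
        throw new IllegalStateException("Requeue requested");
    }
}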

Related

AWS Lambda - ensure complete processing - data persistence, event publishing etc

Say I have some code like this in a lambda function:
public void handleRequest(final S3Event s3Event) {
    // stuff removed for brevity
    fooRepository.add(foo);
    eventPublisher.publish(someEvent1);
    eventPublisher.publish(someEvent2);
}
Here eventPublisher is a wrapper around an sqsClient that sends messages to an SQS queue.
Is there a pattern/approach that can be used to ensure that if one command fails, the whole lot gets aborted/reverted, i.e. fails and succeeds as one unit?
For example, say the publishing of someEvent1 fails: an exception might get thrown, causing the event to be reprocessed by the Lambda. This could then result in duplicate data being added to the DB via the repeated call to fooRepository.add(foo).
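There is no cross-service transaction here, so one common mitigation, sketched below purely as an illustration, is to derive an idempotency key from the event and make processing conditional on it. ProcessedEventStore, FooRepository and EventPublisher are assumed interfaces for this sketch, not part of any AWS SDK.

// Hypothetical sketch; the interfaces below are illustrative only.
interface ProcessedEventStore {
    // E.g. backed by a DynamoDB conditional put on the event ID;
    // returns false if the ID was already recorded.
    boolean markProcessedIfAbsent(String eventId);
}

interface FooRepository { void add(Object foo); }

interface EventPublisher { void publish(Object event); }

public class IdempotentHandler {

    private final ProcessedEventStore processedEvents;
    private final FooRepository fooRepository;
    private final EventPublisher eventPublisher;

    public IdempotentHandler(ProcessedEventStore processedEvents,
                             FooRepository fooRepository,
                             EventPublisher eventPublisher) {
        this.processedEvents = processedEvents;
        this.fooRepository = fooRepository;
        this.eventPublisher = eventPublisher;
    }

    public void handle(String eventId, Object foo, Object event1, Object event2) {
        // A Lambda retry of an already-processed event becomes a no-op,
        // so the DB write cannot be duplicated.
        if (!processedEvents.markProcessedIfAbsent(eventId)) {
            return;
        }
        fooRepository.add(foo);
        eventPublisher.publish(event1);
        eventPublisher.publish(event2);
    }
}

Note that this only de-duplicates the work; it does not make the publishes atomic with the DB write. For that, something like a transactional outbox (persisting the events in the same write and publishing them from a separate poller) is the usual answer.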

How to receive messages from a streaming server?

I am trying to build an asynchronous gRPC C++ client that sends/receives streaming messages to/from the server using a ClientAsyncReaderWriter instance. The client and the server send messages to each other whenever they want. How can I check if there is any message from the server?
The ClientAsyncReaderWriter instance has a bound completion queue. I tried checking the completion queue with the Next() and AsyncNext() functions to see if there is any event indicating a message from the server. However, the completion queue has no events even when the server has sent a message.
class AsyncClient {
public:
    AsyncClient(std::shared_ptr<grpc::Channel> channel) :
        stub_(MyService::NewStub(channel)),
        stream(stub_->AsyncStreamingRPC(&context, &cq, (void *)1))
    {}

    ~AsyncClient() {}

    void receiveFromServer()
    {
        StreamingResponse response;
        // 1. Check if there is any message
        // 2. Read the message
    }

private:
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::shared_ptr<MyService::Stub> stub_;
    std::shared_ptr<grpc::ClientAsyncReaderWriter<StreamingRequest, StreamingResponse>> stream;
};
I need to implement steps 1 and 2 in the receiveFromServer() function.
Fortunately, I found a solution to my problem and the asynchronous bi-directional streaming in my project works as expected now.
It turned out that my understanding of the "completion queue" concept was incorrect.
This example was a great help for me!

Akka DeadLetter monitor not receiving messages sent by unhandled()

I have the following actor setup:
public class Master extends AbstractActor {

    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Init.class, init -> {
                log.info("Master received an Init, creating DLW and subscribing it.");
                ActorRef deadLetterWatcher = context().actorOf(
                    Props.create(DeadLetterWatcher.class), "DLW");
                context().system().eventStream().subscribe(deadLetterWatcher, DeadLetterWatcher.class);
                log.info("Master finished initializing.");
            })
            .matchAny(message -> {
                log.info("Found a {} that Master can't handle.", message.getClass().getName());
                unhandled(message);
            })
            .build();
    }
}

public class DeadLetterWatcher extends AbstractActor {

    protected Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .matchAny(message -> log.info("Got a dead letter!"))
            .build();
    }
}
At startup the Master actor is created and is sent an Init message, and sure enough, I do see the following log output:
Master received an Init, creating DLW and subscribing it.
Master finished initializing.
However shortly after this, Master is sent a Fizzbuzz message, and I see this in the logs:
Found a com.me.myapp.Fizzbuzz that Master can't handle.
But then I don't see the DeadLetterWatcher log "Got a dead letter!", which tells me I have something wired incorrectly. Any ideas where I'm going awry?
Pass in akka.actor.UnhandledMessage.class, instead of DeadLetterWatcher.class, to the subscribe() method:
context().system().eventStream().subscribe(deadLetterWatcher, akka.actor.UnhandledMessage.class);
Note that unhandled messages are not the same thing as dead letters. For the former, an actor "must provide a pattern match for all messages that it can accept, and if you want to be able to handle unknown messages, then you need to have a default case." Your Master actor handles only Init messages; all other messages that it receives are considered "unhandled" and trigger the publication of an akka.actor.UnhandledMessage to the EventStream. You're explicitly calling the unhandled method for non-Init messages, but unhandled would be called by default if you didn't have the fallback case clause. Also note that you can log unhandled messages via the configuration, without the need of a "monitor" actor:
akka {
  actor {
    debug {
      # enable DEBUG logging of unhandled messages
      unhandled = on
    }
  }
}
Dead letters, on the other hand, are messages that cannot be delivered, such as messages that are sent to a stopped actor, and they also trigger the publication of messages to the EventStream.
Since unhandled messages are different from dead letters, your DeadLetterWatcher is misnamed and should probably be named something like UnhandledMessageWatcher. That being said, if your goal is only to log unhandled messages, then the simplest approach is to do so with the logging configuration mentioned above.
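If you go the monitor-actor route anyway, a minimal sketch of the renamed watcher might look like this (subscribed with UnhandledMessage.class as shown above; the whole envelope is logged here rather than relying on its accessors):

import akka.actor.AbstractActor;
import akka.actor.UnhandledMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UnhandledMessageWatcher extends AbstractActor {

    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            // Receives the UnhandledMessage envelopes published to the EventStream
            .match(UnhandledMessage.class, envelope ->
                log.info("Got an unhandled message: {}", envelope))
            .build();
    }
}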

How to avoid receiving messages multiple times from a ServiceBus Queue when using the WebJobs SDK

I have got a WebJob with the following ServiceBus handler using the WebJobs SDK:
[Singleton("{MessageId}")]
public static async Task HandleMessagesAsync(
    [ServiceBusTrigger("%QueueName%")] BrokeredMessage message,
    [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue,
    TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));

        // To avoid coupling to Microsoft.Azure.WebJobs, the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);

        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }
    }
}
If the prerequisites for the handler aren't fulfilled, outputMessages contains a BrokeredMessage with the same MessageId, Label, and payload as the one currently being handled, but with a ScheduledEnqueueTimeUtc in the future.
The idea is that we complete the handling of the current message quickly and wait for a retry by scheduling the new message in the future.
Sometimes, especially when there are more messages in the Queue than the SDK peek-locks, I see messages duplicating in the ServiceBus queue. They have the same MessageId, Label and payload, but a different SequenceNumber, EnqueuedTimeUtc and ScheduledEnqueueTimeUtc. They all have a delivery count of 1.
Looking at my handler code, the only way this can happen is if I received the same message multiple times, figure out that I need to wait and create a new message for handling in the future. The handler finishes successfully, so the original message gets completed.
The initial messages are unique. Also I put the SingletonAttribute on the message handler, so that messages for the same MessageId cannot be consumed by different handlers.
Why are multiple handlers triggered with the same message and how can I prevent that from happening?
I am using Microsoft.Azure.WebJobs v2.1.0.
My handlers run for at most 17 s and about 1 s on average, while the lock duration is 1 m. Still, my best theory is that something with the message (re)locking doesn't work: while the handler is processing, the lock gets lost, the message goes back to the queue and is consumed a second time. If both handlers then see that the critical resource is still occupied, both enqueue a new message.
After a little bit of experimenting I figured out the root cause and I found a workaround.
If a ServiceBus message is completed, but the peek lock is not abandoned, it will return to the queue in active state after the lock expires.
The ServiceBus QueueClient apparently abandons the lock once it receives the next message (or batch of messages).
So if the QueueClient used by the WebJobs SDK terminates unexpectedly (e.g. because of the process being ended or the Web App being restarted), all messages that have been locked appear back in the Queue, even if they have been completed.
In my handler I am now completing the message manually and also abandoning the lock like this:
public static async Task ProcessQueueMessageAsync(
    [ServiceBusTrigger("%QueueName%")] BrokeredMessage message,
    [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue,
    TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));

        // To avoid coupling to Microsoft.Azure.WebJobs, the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);

        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }

        // Complete manually and also abandon the peek lock, so the message
        // does not return to the queue if the QueueClient terminates
        // unexpectedly before the lock is released.
        await message.CompleteAsync().ConfigureAwait(false);
        await message.AbandonAsync().ConfigureAwait(false);
    }
}
That way I don't get the messages back into the Queue in the reboot scenario.

ActiveMQ-cpp Broker URI with PrefetchPolicy has no effect

I am using activemq-cpp 3.7.0 with VS 2010 to build a client; the server is ActiveMQ 5.8. I have created a message consumer using code similar to the following, based on the CMS configurations mentioned here. ConnClass is an ExceptionListener and a MessageListener. I only want to consume a single message before calling cms::Session::commit().
void ConnClass::setup()
{
    // Create a ConnectionFactory
    std::tr1::shared_ptr<ConnectionFactory> connectionFactory(
        ConnectionFactory::createCMSConnectionFactory(
            "tcp://localhost:61616?cms.PrefetchPolicy.queuePrefetch=1"));

    // Create a Connection
    m_connection = std::tr1::shared_ptr<cms::Connection>(
        connectionFactory->createConnection());
    m_connection->start();
    m_connection->setExceptionListener(this);

    // Create a Session
    m_session = std::tr1::shared_ptr<cms::Session>(
        m_connection->createSession(Session::SESSION_TRANSACTED));

    // Create the destination (Queue)
    m_destination = std::tr1::shared_ptr<cms::Destination>(
        m_session->createQueue("myqueue?consumer.prefetchSize=1"));

    // Create a MessageConsumer from the Session to the Queue
    m_consumer = std::tr1::shared_ptr<cms::MessageConsumer>(
        m_session->createConsumer(m_destination.get()));
    m_consumer->setMessageListener(this);
}

void ConnClass::onMessage(const Message* message)
{
    // read message code ...
    // schedule a processing event for
    // another thread that calls m_session->commit() when done
}
The problem is that I am receiving multiple messages instead of one message before calling m_session->commit() (I know this because the commit() call is triggered by user input). How can I ensure onMessage() is only called once before each call to commit()?
It doesn't work that way. When using async consumers, messages are delivered as fast as the onMessage method completes. If you want to consume one and only one message, use a synchronous receive call instead.
For an async consumer, the prefetch allows the broker to buffer up work on the client instead of sending one message at a time, so you generally get better performance. In your case, as the async onMessage call completes, an ack is sent back to the broker and the next message is sent to the client.
Yes, I found this too. However, when I use the Destination URI option ("consumer.prefetchSize=15", http://activemq.apache.org/cms/configuring.html#Configuring-DestinationURIParameters) for the asynchronous consumer, it works well.
BTW, I used the latest ActiveMQ-CPP v3.9.4 by Tim, and ActiveMQ v5.12.1 on CentOS 7.
Thanks!