What is the Message Broker retry interval and how do I configure it? - wso2

I am using a WSO2 Message Broker (MB 3.0.0) server to communicate between microservices, using a topic connection.
In the dashboard, under the "Durable Topic Subscriptions" section, the "Number Of Messages Delivery Pending" column shows messages as pending, and that count keeps increasing. Is there any configuration for a message delivery delay or retry interval?

A redelivery delay was introduced in MB 3.2.0 and can be set as a system property:

System.setProperty("AndesRedeliveryDelay", "10000");

The maximum number of redelivery attempts can be set in broker.xml as follows:

<maximumRedeliveryAttempts>10</maximumRedeliveryAttempts>
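For reference, a minimal sketch of setting that property from code before any JMS/Andes connections are created. The class name is a made-up placeholder, and whether the property has to go into the broker JVM or the client JVM (e.g. as a -DAndesRedeliveryDelay=10000 startup flag instead) depends on your setup:

public class RedeliveryTuning {
    public static void main(String[] args) {
        // 10000 ms = a 10-second delay before an unacknowledged message is redelivered
        System.setProperty("AndesRedeliveryDelay", "10000");
        // ... create the connection factory and topic subscribers after this point ...
    }
}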

Related

AWS IoT Core, messages are not being kept with QoS 1

I set my publish topic/payload to QoS 1, but if I subscribe to that topic 15 minutes after I publish, the message isn't there. I checked CloudWatch, but there isn't a publish-out.
Is there a way to find out if someone/something is connected to my broker with #? I'm not sure that would cause things to disappear without a publish-out, though.
If I retain the message with the retain flag, that message can get pulled down without an issue.
MQTT messages are not queued for new clients (that have never been connected before).
The only way an MQTT broker will queue messages for a client is if that client has previously connected, had a subscription at QoS 1 or 2, and reconnects with the same client id and the CleanSession flag set to false.
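As a minimal sketch of that combination with the Eclipse Paho Java client (the broker URL, client id, and topic are placeholders, and the TLS/credential setup AWS IoT Core requires is omitted):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class PersistentSubscriber {
    public static void main(String[] args) throws Exception {
        // Always reconnect with the same client id so the broker can find the stored session
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "my-device-01");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false); // keep the session (and queued QoS 1/2 messages) across disconnects
        client.connect(options);
        // A QoS 1 (or 2) subscription is what makes the broker queue messages while the client is offline
        client.subscribe("my/topic", 1, (topic, message) ->
                System.out.println(topic + ": " + new String(message.getPayload())));
    }
}

Note that only messages published at QoS 1 or 2 while the subscriber is offline are queued; QoS 0 publishes are never stored.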

Google pub/sub ERROR com.google.cloud.pubsub.v1.StreamingSubscriberConnection

I have a Snowplow enricher application hosted in GKE, consuming messages from a Google Pub/Sub subscription, and the enricher application is throwing the error below.
I can see the num_undelivered_messages count spiking (going above 50,000) in the Pub/Sub subscription 3-4 times a day, and I presume these errors occur because the enricher application is unable to fetch messages from the mentioned subscription.
Why is the application unable to connect to the Pub/Sub subscription at times?
Any help is really appreciated.
Apr 12, 2022 12:30:32 PM com.google.cloud.pubsub.v1.StreamingSubscriberConnection$2 onFailure
WARNING: failed to send operations
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1050)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1176)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:969)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:760)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:545)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:515)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
at io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: 502:Bad Gateway
at io.grpc.Status.asRuntimeException(Status.java:533)
... 15 more
The accumulation of messages in the subscriptions suggests that your subscribers are not keeping up with the flow of messages.
To monitor your subscribers, you can create a dashboard that contains the backlog metrics num_undelivered_messages and oldest_unacked_message_age (the age of the oldest unacknowledged message in the subscription's backlog), aggregated by resource for all your subscriptions.
If both oldest_unacked_message_age and num_undelivered_messages are growing, the subscribers are not keeping up with the message volume.
Solution: add more subscriber threads/machines and look for any bugs in your code that might prevent messages from being acknowledged (see the sketch after this answer).
If there is a steady, small backlog size with a steadily growing oldest_unacked_message_age, there may be a small number of messages that cannot be processed. This can be due to the messages getting stuck.
Solution: check your application logs to understand whether some messages are causing your code to crash. It's unlikely, but possible, that the offending messages are stuck in Pub/Sub rather than in your client.
If oldest_unacked_message_age exceeds the subscription's message retention duration, there is a high chance of data loss; in that case the best option is to set up alerts that fire before the retention duration lapses.
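As a rough sketch of the first solution with the Pub/Sub Java client, where the project, subscription name, and thread/stream counts are placeholders to show which knobs exist rather than recommended values:

import com.google.api.gax.core.InstantiatingExecutorProvider;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;

public class ScaledSubscriber {
    public static void main(String[] args) {
        ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "my-subscription");

        MessageReceiver receiver = (message, consumer) -> {
            try {
                // ... process the message ...
                consumer.ack();   // ack only after processing succeeds
            } catch (Exception e) {
                consumer.nack();  // let Pub/Sub redeliver the message
            }
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver)
                .setParallelPullCount(2) // more streaming pull connections per client
                .setExecutorProvider(    // more threads processing received messages
                        InstantiatingExecutorProvider.newBuilder().setExecutorThreadCount(8).build())
                .build();
        subscriber.startAsync().awaitRunning();
    }
}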

Message stays in GCP Pub/Sub even after acknowledge using GCP REST API

I am using the following GCP Pub/Sub REST APIs for pulling and acknowledging messages.
To pull messages:
POST https://pubsub.googleapis.com/v1/projects/myproject/subscriptions/mysubscription:pull
{
  "returnImmediately": "false",
  "maxMessages": "10"
}
To acknowledge messages:
POST https://pubsub.googleapis.com/v1/projects/myproject/subscriptions/mysubscription:acknowledge
{
  "ackIds": [
    "dQNNHlAbEGEIBERNK0EPKVgUWQYyODM2LwgRHFEZDDsLRk1SK..."
  ]
}
I am using the Postman tool to call the above APIs, but I can see the same message, with the same messageId and a different ackId, even after the acknowledgement when I pull messages the next time. Is there any mechanism available to exclude the acknowledged messages in the GCP pull (subscriptions/mysubscription:pull)?
Cloud Pub/Sub is an at-least-once delivery system, so some duplicates are expected. However, if you are always seeing duplicates, it is likely that you are not acknowledging the message before the ack deadline passes. The default ack deadline is 10 seconds. If you do not call ack within that time period, then the message will be redelivered. You can set the ack deadline on a subscription to up to 600 seconds.
If all of your messages are expected to take a longer time to process, then it is best to increase the ack deadline. If only a couple of messages will be slow and most will be processed quickly, then it's better to use the modifyAckDeadline call to increase the ack deadline on a per-message basis.
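If you go the modifyAckDeadline route with the same REST API as in the question, the request has the same shape as the acknowledge call; the ackId below is the (truncated) one returned by the pull response, and the 120-second value is just an example:

POST https://pubsub.googleapis.com/v1/projects/myproject/subscriptions/mysubscription:modifyAckDeadline
{
  "ackIds": [
    "dQNNHlAbEGEIBERNK0EPKVgUWQYyODM2LwgRHFEZDDsLRk1SK..."
  ],
  "ackDeadlineSeconds": 120
}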

WSO2 ESB scheduled message forwarding processor becomes inactive after reaching max delivery attempt

I tried to follow this link, and I did it step by step four times. For the first three attempts I used WSO2 MB as the broker, and the last time I tried Apache ActiveMQ. The problem is that when I shut down the SimpleQuoteService server and send messages to the proxy via SoapUI, they accumulate in my queue and my scheduled message forwarding processor becomes inactive after reaching the maximum delivery attempts, but the WSO2 ESB documentation says:
"To test the failover scenario, shut down the JMS broker(i.e., the original message store) and send a few messages to the proxy service.
You will see that the messages are not sent to the back-end since the original message store is not available. You will also see that the messages are stored in the failover message store."
Can anyone explain?
You can prevent the message processor from being deactivated by setting the "max.delivery.drop" parameter to 'Enabled'. The processor will then drop the message after the maximum number of delivery attempts without being deactivated. See here for the documentation (definitions) of these parameters.
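For reference, a sketch of where those parameters sit in a scheduled message forwarding processor definition; the processor, store, and endpoint names are placeholders, the interval and attempt count are illustrative, and the exact attributes can vary between ESB versions:

<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  name="ForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="OriginalMessageStore"
                  targetEndpoint="BackendEndpoint">
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
    <parameter name="max.delivery.drop">Enabled</parameter>
</messageProcessor>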

RabbitMQ Visibility Timeout

Do RabbitMQ queues have something like the AWS SQS "message visibility timeout"?
From the AWS SQS documentation:
"The visibility timeout clock starts ticking once Amazon SQS returns the message. During that time, the component processes and deletes the message. But what happens if the component fails before deleting the message? If your system doesn't call DeleteMessage for that message before the visibility timeout expires, the message again becomes visible to the ReceiveMessage calls placed by the components in your system and it will be received again"
Thanks!
I believe you are looking for RabbitMQ's manual acknowledgment feature. It lets you receive messages from the queue and acknowledge them only once you have processed them. If something goes wrong in the middle of this process, the message becomes available in the queue again (it is requeued when the consumer's channel or connection closes without an ack). In the meantime, from the moment you receive the message until you ack it, it is not available to other consumers.
I think this is the same behavior as the SQS message visibility timeout.
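A minimal sketch with the RabbitMQ Java client, assuming a queue named task_queue on a local broker: the consumer is opened with autoAck set to false, and the message only leaves the queue once basicAck is called; if the consumer's channel or connection dies first, the broker makes the message available again.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        DeliverCallback deliverCallback = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            System.out.println("received: " + body);
            // ... process the message ...
            // Ack only after processing succeeds; until then the delivery stays
            // "unacked" and is not handed to other consumers.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        // autoAck = false enables manual acknowledgments
        channel.basicConsume("task_queue", false, deliverCallback, consumerTag -> { });
    }
}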
There aren't any message timeouts; RabbitMQ will redeliver the message only when the worker connection dies. It's fine even if processing a message takes a very, very long time.
I believe the answer can be found in a discussion of MQ vs SQS. Generally this is considered a feature of MQ (that it can handle slow consumers), but using a destination policy of "slowConsumerStrategy" with "abortSlowConsumerStrategy" might solve your problem. A fuller explanation can be found in Red Hat's MQ documentation, and I suppose we have to hope that RabbitMQ and Amazon MQ both support that strategy.