WSO2 ESB scheduled message forwarding processor becomes inactive after reaching max delivery attempts - wso2

I tried to follow this link, and I did it step by step four times. For the first three attempts I used WSO2 MB as the broker, and the last time I tried Apache ActiveMQ. The problem is that when I shut down the SimpleQuoteService server and send messages to the proxy via SoapUI, they accumulate in my queue and my scheduled message forwarding processor becomes inactive after reaching the max delivery attempts. But the WSO2 ESB documentation says:
"To test the failover scenario, shut down the JMS broker (i.e., the original message store) and send a few messages to the proxy service.
You will see that the messages are not sent to the back-end since the original message store is not available. You will also see that the messages are stored in the failover message store."
Can anyone explain?

You can stop the message processor from deactivating itself by setting the "max.delivery.drop" parameter to 'Enabled'. The processor will then drop a message after the max delivery attempts are reached, without being deactivated. See here for the docs (definitions) of these parameters.
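For example, a minimal sketch of such a processor definition (the processor, store and endpoint names here are placeholders, not from the original setup):

<messageProcessor name="ForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="OriginalStore"
                  targetEndpoint="SimpleQuoteServiceEP"
                  xmlns="http://ws.apache.org/ns/synapse">
   <!-- retry every second, give up on a message after 4 attempts -->
   <parameter name="interval">1000</parameter>
   <parameter name="max.delivery.attempts">4</parameter>
   <!-- drop the message instead of deactivating the processor -->
   <parameter name="max.delivery.drop">Enabled</parameter>
</messageProcessor>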

Related

Google Cloud Pub/Sub message delivered more than once before reaching the acknowledgement deadline

Background:
We configured a Cloud Pub/Sub topic to interact with multiple App Engine services, and we configured push-based subscribers with an acknowledgement deadline of 600 seconds.
Issue:
We have observed that Pub/Sub pushed the same message twice (more than twice from some other topics) to its subscribers. Looking at the logs, I can see the two pushes happened just 1 second apart. Ideally, since we have configured the ackDeadline to 600 seconds, Pub/Sub should re-attempt message delivery only after 600 seconds.
I need answers to the following:
Why was the same message delivered more than once within just 1 second?
Does Pub/Sub not honor the ackDeadline configuration before reattempting message delivery?
References:
- https://cloud.google.com/pubsub/docs/subscriber
Message redelivery can happen for a couple of reasons. First of all, it is possible that a message got published twice. Sometimes the publisher will get back an error like a deadline exceeded, meaning the publish took longer than anticipated. The message may or may not have actually been published in this situation. Often, the correct action is for the publisher to retry the publish and in fact that is what the Google-provided client libraries do by default. Consequently, there may be two copies of the message that were successfully published, even though the client only got confirmation for one of them.
Secondly, Google Cloud Pub/Sub guarantees at-least-once delivery. This means that occasionally, messages can be redelivered, even if the ackDeadline has not yet passed or an ack was sent back to the service. Acknowledgements are best effort and most of the time, they are successfully processed by the service. However, due to network glitches, server restarts, and other regular occurrences of that nature, sometimes the acknowledgements sent by the subscriber will not be processed, resulting in message redelivery.
A subscriber should be designed to be resilient to these occasional redeliveries, generally by ensuring that operations are idempotent, i.e., that the results of processing the message multiple times are the same, or by tracking and catching duplicates. Alternatively, one can use Cloud Dataflow as a subscriber to remove duplicates.
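As a sketch of the "tracking and catching duplicates" approach, here is a minimal Java example of an idempotent handler keyed on the Pub/Sub message ID. The class and method names are hypothetical; note that duplicates created by publisher retries carry different message IDs, so those need an application-level key instead, and a real multi-instance App Engine service would need a shared, expiring store (e.g. Redis or Datastore) rather than in-process memory:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class IdempotentHandler {
    // Message IDs we have already processed (in-memory for illustration only).
    private final ConcurrentMap<String, Boolean> seen = new ConcurrentHashMap<>();

    // Returns true if the message was processed, false if it was a duplicate.
    public boolean handle(String messageId, String payload) {
        if (seen.putIfAbsent(messageId, Boolean.TRUE) != null) {
            return false; // duplicate delivery: acknowledge without reprocessing
        }
        process(payload); // must itself be safe to retry if the ack is lost
        return true;
    }

    private void process(String payload) {
        System.out.println("processing: " + payload);
    }
}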

What is the Message Broker retry interval and how do I configure it?

I am using a WSO2 Message Broker (MB 3.0.0) server for communication between microservices, over a topic connection.
In the dashboard, under the "Durable Topic Subscriptions" section, the "Number Of Messages Delivery Pending" column shows messages as pending, and that count keeps increasing. Is there any configuration for the message delivery delay or retry interval?
A redelivery delay was introduced in MB 3.2.0 and can be set as a system property:

System.setProperty("AndesRedeliveryDelay", "10000");

Also, the maximum number of redelivery attempts can be set in broker.xml as follows:

<maximumRedeliveryAttempts>10</maximumRedeliveryAttempts>
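Equivalently, since this is a plain JVM system property, it can be passed as a startup flag (a generic JVM mechanism, not something MB-specific), for example in the server start script:

-DAndesRedeliveryDelay=10000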

Messages are not automatically moved to dead letter channel (DLC) - broker - wso2 ei

I'm using WSO2 EI 6.1.1 with Message Broker, and I'm trying to create a message queue with a message store and a message processor pointing to an endpoint.
When I shut down my endpoint, the message processor is deactivated and the messages stay in the queue; they are not moved to the DLC.
What should I do to make it work?
Thanks,
Faris Shomou
This is the expected behavior with a message processor / message store:
A scheduled message processor will try to send the message until delivery is successful (this offers you a way to implement the guaranteed-delivery pattern).
A sampling message processor will send the message in a non-reliable way (it can be lost).
If you want to manage a JMS transaction and have the message go to the DLQ, use a JMS inbound endpoint or a JMS proxy and set the required parameters (transport.jms.SessionTransacted, transport.jms.SessionAcknowledgement; have a look at the WSO2 documentation: https://docs.wso2.com/display/EI611/JMS+Transactions). A sketch follows below.
The message store / processor combination is used to implement the dead letter channel EIP: the JMS store hosts the dead messages, and you don't want them to be moved elsewhere.
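A minimal sketch of such a transacted JMS proxy (the proxy and endpoint names are placeholders; the broker-side redelivery limit decides when the message is actually moved to the DLC):

<proxy name="JmsTxProxy" transports="jms" xmlns="http://ws.apache.org/ns/synapse">
   <target>
      <inSequence>
         <send>
            <endpoint key="BackendEP"/>
         </send>
      </inSequence>
      <faultSequence>
         <!-- mark the JMS session for rollback so the broker redelivers,
              and eventually moves the message to the DLC -->
         <property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>
      </faultSequence>
   </target>
   <parameter name="transport.jms.SessionTransacted">true</parameter>
   <parameter name="transport.jms.SessionAcknowledgement">SESSION_TRANSACTED</parameter>
</proxy>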

WSO2 guaranteed-delivery pattern implementation: the sampling processor doesn't work with more than 20 messages

I'm quite a newbie with WSO2, so sorry for the mistakes (and for my English too...).
I need to implement a proxy with the guaranteed-delivery pattern, and here is my solution (I started from this post: http://charith.wickramaarachchi.org/2012/05/another-message-redelivery-pattern-with.html):
a proxy invokes an external service, giving the initial client message as input
if the external service is running, everything works fine and the reply is returned to the client
if the external service is down or generates a SOAP fault, I put the message in a store (the retry store) and then, using a sampling processor (after a time "t"), I try again, for at most "n" attempts:
at each attempt, if the external service is down or generates a SOAP fault, I put the message back in the retry store and the process is repeated
after "n" attempts, if the external service is still out of service, the message is stored in another store (the garbage store)
Everything works fine when I test with one message, but when I test with more messages (more than 20, though this number varies...), the sampling processor hangs completely and nothing is shown in the logs. Looking in the console, sometimes (but not always...) the processor is deactivated, and in that case, to restore it, I need to undeploy, stop and restart, and then deploy my .car again.
NOTE: I have to use the sampling processor and not the forwarding processor, because the forwarding processor deactivates itself after "n" attempts, so I can't use it for my goal.
I can't put the complete code here because it is too long, but I can give you a sample .car that you can deploy and execute on your WSO2 installation (to simulate the external service I used the echo service...).
Here is the sample .car that you can download.
Thank you very much in advance: all suggestions are appreciated!
Cesare
Message Forwarding Processor
Retrieves the messages stored in a message store and reliably forwards them to a specified endpoint. This processor attempts to send one message at a time and it does not dequeue a message from the store until it receives a response from the target endpoint. Therefore this processor is ideal for implementing in-order delivery scenarios and guaranteed delivery scenarios.
Sampling Processor
Retrieves the messages stored in a message store and injects them to a given sequence at specified intervals. This processor utilizes the Quartz scheduler framework for periodically processing messages. This can be used to implement message rate throttling scenarios.
--> You can use the forwarding processor and configure it so that it will never be deactivated; just add this parameter: <parameter name="max.delivery.attempts">-1</parameter>
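For instance (the processor, store and endpoint names are placeholders):

<messageProcessor name="RetryForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="RetryStore"
                  targetEndpoint="ExternalServiceEP"
                  xmlns="http://ws.apache.org/ns/synapse">
   <parameter name="interval">5000</parameter>
   <!-- -1 = retry forever; the processor is never deactivated -->
   <parameter name="max.delivery.attempts">-1</parameter>
</messageProcessor>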

How to ensure that a text message was sent successfully via JMS?

I have written a text message sender program using JMS in C++, as follows:
tibems_status status = TIBEMS_OK;
status = tibemsMsgProducer_SendToDestination(
    m_tProducer,
    m_tDestination,
    m_tMsg );
Suppose status == 0: this only means that the function call succeeded. It doesn't mean that my text message was sent successfully.
How can I ensure that my message was sent successfully? Should I get an ID or an acknowledgement back from the JMS queue?
It depends on the Message Delivery Mode.
When a PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call will wait for the EMS server to reply with a confirmation.
When a NON_PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call may or may not wait for a confirmation, depending on whether authorization is enabled and on the npsend_check_mode setting. See the EMS docs (linked above) for specific details.
Lastly, when a RELIABLE_DELIVERY message is sent, the tibemsMsgProducer_SendToDestination call does not wait for a confirmation and will only fail if the connection to the EMS server is lost.
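For illustration, selecting the mode with the EMS Java JMS client looks like the sketch below (the C API exposes the equivalent tibemsDeliveryMode constants; this is a fragment under those assumptions, not a complete program):

import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import com.tibco.tibjms.Tibjms;

public class DeliveryModes {
    static void configure(MessageProducer producer, boolean reliable) throws JMSException {
        if (reliable) {
            // TIBCO extension: no per-message confirmation; a send fails only
            // if the connection to the EMS server is lost.
            producer.setDeliveryMode(Tibjms.RELIABLE_DELIVERY);
        } else {
            // Standard JMS persistent mode: each send waits for the server's
            // confirmation before returning.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        }
    }
}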
However, even in the situations where a confirmation is sent, this is only confirmation that the EMS server has received the message. It does not confirm that the message was received and processed by the message consumer. EMS Monitoring Messages can be used to determine if the message was acknowledged by the consumer.
The message monitoring topics are in the form $sys.monitor.<D>.<E>.<destination>, where <D> matches Q|q|T|t, <E> matches s|r|a|p|*, and <destination> is the name of the destination. For instance, to monitor for message acknowledgement on the queue named beterman, your program would subscribe to $sys.monitor.q.a.beterman (or $sys.monitor.Q.a.beterman if you want a copy of the message that was acknowledged).
The monitoring messages contain many properties, including the msg_id, source_name and target_name. You can use that information to correlate it back to the message you sent.
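As a sketch, such a monitoring subscriber using the EMS Java JMS client could look like this (the server URL, credentials and queue name are placeholders, and message monitoring has to be enabled on the server):

import javax.jms.*;
import com.tibco.tibjms.TibjmsConnectionFactory;

public class AckMonitor {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new TibjmsConnectionFactory("tcp://localhost:7222");
        Connection connection = factory.createConnection("admin", "");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // acknowledgement monitor topic for the queue "beterman"
        MessageConsumer consumer =
                session.createConsumer(session.createTopic("$sys.monitor.q.a.beterman"));
        connection.start();
        while (true) {
            Message m = consumer.receive();
            // correlate back to the sent message via the monitor properties
            System.out.println("acked msg_id=" + m.getStringProperty("msg_id")
                    + " source=" + m.getStringProperty("source_name")
                    + " target=" + m.getStringProperty("target_name"));
        }
    }
}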
Otherwise, the simpler option is to use a tibemsMsgRequestor instead of a tibemsMsgProducer. tibemsMsgRequestor_Request will send the message and wait for a reply from the recipient. In this case it is best to use RELIABLE_DELIVERY and NO_ACKNOWLEDGE to remove all the confirmation and acknowledgement messages between the producer and the EMS server, and between the EMS server and the consumer.
However, if you do go down the tibemsMsgRequestor route, you may also want to consider simply using an HTTP request instead, with a load balancer in place of the EMS server. Architecturally there isn't much difference between the two options (EMS uses persistent TCP connections, HTTP doesn't):

Producer -> EMS Server -> ConsumerA
                       -> ConsumerB

Client -> Load Balancer -> ServerA
                        -> ServerB
But with HTTP you have clear semantics for each of the methods: GET is safe (it does not change state), PUT and DELETE are idempotent (multiple identical requests should have the same effect as a single request), POST is non-idempotent (it causes a change in server state each time it is performed), and so on. You also have well-defined status codes. If you're using tibemsMsgRequestor you'll need to create bespoke semantics and response statuses, which will require extra effort to create and maintain, and to train the other developers in your team.
Also, it is far easier to find developers with HTTP skills than EMS skills, and it's far easier to find information on HTTP than on EMS, so the tibemsMsgRequestor option will make recruiting and problem solving more difficult.
Because of this, HTTP is the better option IMO for request-reply, or for when you want to ensure that the message you sent was processed successfully.