How to ensure that a text message was sent successfully via JMS? - c++

I have written a text message sender program using JMS with C++, as follows:
    tibems_status status = TIBEMS_OK;
    status = tibemsMsgProducer_SendToDestination(
        m_tProducer,
        m_tDestination,
        m_tMsg );
Suppose status == 0. This only means that the function call itself succeeded; it does not mean that my text message was sent successfully.
How can I ensure that my message was sent successfully? Should I get an ID or an acknowledgement back from the JMS queue?

It depends on the Message Delivery Mode.
When a PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call will wait for the EMS server to reply with a confirmation.
When a NON_PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call may or may not wait for a confirmation, depending on whether authorization is enabled and on the npsend_check_mode server setting. See the EMS documentation for the specific details.
Lastly, when a RELIABLE_DELIVERY message is sent, the tibemsMsgProducer_SendToDestination call does not wait for a confirmation and will only fail if the connection to the EMS server is lost.
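If you need the stronger guarantee, you can explicitly request PERSISTENT delivery before sending; a TIBEMS_OK status then reflects the server's confirmation. A minimal sketch (untested; it assumes the standard EMS C API call tibemsMsgProducer_SetDeliveryMode and the TIBEMS_PERSISTENT constant, so check tibems.h for the exact names):

    /* Request PERSISTENT delivery so the send call blocks until
     * the EMS server confirms it has received the message. */
    tibems_status status = tibemsMsgProducer_SetDeliveryMode( m_tProducer, TIBEMS_PERSISTENT );
    if ( status != TIBEMS_OK )
        return status;

    status = tibemsMsgProducer_SendToDestination( m_tProducer, m_tDestination, m_tMsg );
    /* With PERSISTENT mode, TIBEMS_OK now means the server has the
     * message, not merely that the call itself succeeded. */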
However, even in the situations where a confirmation is sent, this is only confirmation that the EMS server has received the message. It does not confirm that the message was received and processed by the message consumer. EMS Monitoring Messages can be used to determine if the message was acknowledged by the consumer.
The message monitoring topics are of the form $sys.monitor.<D>.<E>.<destination>, where <D> matches Q|q|T|t, <E> matches s|r|a|p|* and <destination> is the name of the destination. For instance, to monitor for message acknowledgment on the queue named beterman, your program would subscribe to $sys.monitor.q.a.beterman (or $sys.monitor.Q.a.beterman if you want a copy of the message that was acknowledged).
The monitoring messages contain many properties, including the msg_id, source_name and target_name. You can use that information to correlate it back to the message you sent.
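For illustration, subscribing to that acknowledgement topic with the EMS C API could look roughly like this (a sketch only: connection/session setup and error checking are omitted, and the exact signatures should be verified against the EMS C reference):

    tibemsTopic       monitorTopic    = NULL;
    tibemsMsgConsumer monitorConsumer = NULL;
    tibemsMsg         monitorMsg      = NULL;
    const char*       msgId           = NULL;

    /* Monitor acknowledgements for the queue named 'beterman'. */
    tibemsTopic_Create( &monitorTopic, "$sys.monitor.q.a.beterman" );
    tibemsSession_CreateConsumer( session, &monitorConsumer, monitorTopic,
                                  NULL, TIBEMS_FALSE );

    /* Block until a consumer acknowledges a message on that queue,
     * then use msg_id to correlate it with the message you sent. */
    tibemsMsgConsumer_Receive( monitorConsumer, &monitorMsg );
    tibemsMsg_GetStringProperty( monitorMsg, "msg_id", &msgId );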
Otherwise, the simpler option is to use a tibemsMsgRequestor instead of a tibemsMsgProducer. tibemsMsgRequestor_Request will send the message and wait for a reply from the recipient. In this case it is best to use RELIABLE_DELIVERY and NO_ACKNOWLEDGE, to remove all the confirmation and acknowledgement messages between the producer and the EMS server, and between the EMS server and the consumer.
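The request/reply variant, again as an untested sketch (it assumes the tibemsMsgRequestor_Create and tibemsMsgRequestor_Request entry points of the EMS C API):

    tibemsMsgRequestor requestor = NULL;
    tibemsMsg          reply     = NULL;

    tibems_status status = tibemsMsgRequestor_Create( session, &requestor, m_tDestination );
    if ( status != TIBEMS_OK )
        return status;

    /* Blocks until the consumer has processed the message and replied,
     * which is the end-to-end confirmation a plain send cannot give. */
    status = tibemsMsgRequestor_Request( requestor, m_tMsg, &reply );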
However, if you do go down the tibemsMsgRequestor route, then you may also want to consider simply using an HTTP request instead, with a load balancer in place of the EMS server. Architecturally there isn't much difference between the two options (EMS uses persistent TCP connections, HTTP doesn't):
Producer -> EMS Server -> ConsumerA
                       -> ConsumerB

Client -> Load Balancer -> ServerA
                        -> ServerB
But with HTTP you have clear semantics for each of the methods. GET is safe (it does not change state), PUT and DELETE are idempotent (multiple identical requests should have the same effect as a single request), and POST is non-idempotent (it causes a change in server state each time it is performed), etc. You also have well-defined status codes. If you're using tibemsMsgRequestor you'll need to create bespoke semantics and response statuses, which will require extra effort to create and maintain, and extra effort to train the other developers on your team.
Also, it is far easier to find developers with HTTP skills than with EMS skills, and far easier to find information on HTTP than on EMS, so the tibemsMsgRequestor option will make both recruiting and troubleshooting more difficult.
Because of this, HTTP is IMO the better option for request-reply, or for when you want to ensure that the message you sent was processed successfully.

Related

AWS SQS - when will the duplicated message arrive?

I understand that standard SQS uses "at least once" delivery while FIFO messages are delivered exactly once. I'm trying to weigh standard queues vs FIFO for my application, and one factor is how long it takes for the duplicated message to arrive.
I intend to consume messages from SQS then post the data I received to an idempotent third-party API. I understand that with standard SQS, there's always a risk of me overwriting more recent data with the old duplicated data.
For example:
Message A arrives, I post it onwards.
Message A duplicate arrives, I post it onwards.
Message B arrives, I post it onwards.
All fine ✓
On the other hand:
Message A arrives, I post it onwards.
Message B arrives, I post it onwards.
Message A duplicate arrives - I post it and overwrite the latest data, which was B! ✖
I want to measure this risk, i.e. I want to know how long a duplicate message should take to arrive. Will the duplicate take roughly the same amount of time to arrive as the original message?
Maybe it's useful to understand how message duplication occurs. As far as I know this isn't documented in the official docs; what follows is my mental model of how it works, so treat it as an educated guess.
Whenever you send a message to SQS (SendMessage API), this message arrives at the SQS webservice endpoint, which is one of probably thousands of servers. This endpoint receives your message, duplicates it one or more times and stores these duplicates on more than one SQS server. After it has received confirmation from at least two SQS servers, it acknowledges to the client that the message has been received.
When you call the ReceiveMessage API, only a subset of the SQS servers that handle your queue are queried for messages. When a message is returned, these servers communicate to their peers that this message is currently in-flight, and the visibility timeout starts. This doesn't happen instantaneously, as it's a distributed system. While this ReceiveMessage call takes place, another consumer might also do a ReceiveMessage call and happen to query one of the servers that holds a replica of the message before it's marked as in-flight. That server hands out the message, and now you have two consumers working on it.
This is just one scenario, which is the result of this being a distributed system.
There are a couple of edge cases that can happen as the result of network issues, e.g. when the SQS response to the initial SendMessage gets lost and the client thinks the message didn't arrive and sends it again - poof, you got another duplicate.
The point being: things fail in weird and complex ways. That makes measuring the risk of a delayed duplicate difficult. If your use case can't handle duplicate and out-of-order messages, you should go for FIFO, but that will inherently limit your throughput. Alternatives are based on distributed locking mechanisms and on keeping track of which messages you have already processed, which are complex tools for solving a complex problem.
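To make the "keep track of which messages you have already processed" option concrete, here is a minimal single-process C++ sketch (not AWS SDK code; the id, key and timestamp fields are assumptions about what your messages carry) that drops exact duplicates and refuses to overwrite newer data with an older message:

    #include <string>
    #include <unordered_map>
    #include <unordered_set>

    struct Message {
        std::string id;        // assumed unique per logical message
        std::string key;       // the record the third-party API stores
        long long   timestamp; // assumed producer-side event time
        std::string payload;
    };

    class IdempotentForwarder {
        std::unordered_set<std::string> seenIds;            // exact-duplicate filter
        std::unordered_map<std::string, long long> latest;  // last timestamp posted per key
    public:
        bool shouldForward(const Message& m) {
            if (!seenIds.insert(m.id).second)
                return false;  // already posted this exact message
            auto it = latest.find(m.key);
            if (it != latest.end() && it->second >= m.timestamp)
                return false;  // older than what we already posted: don't overwrite B with A
            latest[m.key] = m.timestamp;
            return true;
        }
    };

In a real deployment this state would have to live in a shared, persistent store, since consumers come and go; that is exactly where the complexity mentioned above comes in.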

C++/TCP Server - Convenient way of event notification

I have created a simple C++ TCP Server application.
A client connects and receives back, as a simple echo, everything it sends to the server. It has no purpose at all, except for me to test the communication.
So far so good. The next task for me is to decide how to send a notification to the server that a specific event has started.
Some event examples:
Player wrote a message - the server accepts the data sent from the client, recognizes that it's a chat message, and sends data back to all connected clients that there is a new message. The clients recognize that a new message is incoming.
Player is casting spell.
Player has died
Many more examples but you get the main idea.
I was thinking of sending all the data in JSON format, where every message contains an identifier like:
0x01 is message event.
0x02 is casting spell event.
0x03 is player dead event.
Once the identifier is sent, the server can recognize which event the client is asking about or informing it of, and can apply the needed logic.
My question is: isn't there a better approach to identify which event the server is being notified of?
I am searching for a better approach before I take this road.
You can take a look at the standard ISO 8583 message format. It's a financial message, but every message carries a processing code that determines what action should be taken for each incoming message.
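Whichever codes you choose, the server side usually boils down to a type tag dispatched through a handler table. A minimal sketch using the event codes proposed in the question (everything else here is made up for illustration):

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    enum class EventType : std::uint8_t {
        ChatMessage = 0x01,
        CastSpell   = 0x02,
        PlayerDied  = 0x03,
    };

    // Each handler receives the JSON payload that followed the type byte.
    using Handler = std::function<void(const std::string&)>;

    int main() {
        std::unordered_map<EventType, Handler> handlers {
            { EventType::ChatMessage, [](const std::string& p) { std::cout << "chat: "  << p << '\n'; } },
            { EventType::CastSpell,   [](const std::string& p) { std::cout << "spell: " << p << '\n'; } },
            { EventType::PlayerDied,  [](const std::string& p) { std::cout << "death: " << p << '\n'; } },
        };

        // Pretend this frame was just read off the socket: [type byte][payload].
        EventType type = EventType::ChatMessage;
        std::string payload = R"({"from":"player1","text":"hello"})";

        auto it = handlers.find(type);
        if (it != handlers.end())
            it->second(payload);             // known event: run its logic
        else
            std::cout << "unknown event\n";  // reject unknown codes uniformly
    }

This is also the appeal of an ISO 8583-style processing code: one well-known field tells the server which handler to run, and unknown codes can be rejected in one place.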

Ability to ensure message was successfully sent to Event Hub from APIM

Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed; it just cares about making sure the message will be successfully delivered.
For more details, refer to "Why send to an Azure Event Hub?".
Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
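The pipelined, per-message settlement model that the quote describes looks roughly like this (a generic C++ sketch with std::future standing in for the SDK's asynchronous send; none of these names come from an actual Azure client library):

    #include <future>
    #include <iostream>
    #include <string>
    #include <vector>

    // Stand-in for an SDK send whose settlement arrives asynchronously.
    std::future<bool> sendAsync(const std::string& msg) {
        return std::async(std::launch::async, [msg] {
            // ... wire transfer happens here; the broker accepts or rejects ...
            return true;  // accepted: the per-message "receipt"
        });
    }

    int main() {
        std::vector<std::future<bool>> pending;
        // Put several messages on the wire without waiting in between.
        for (int i = 0; i < 5; ++i)
            pending.push_back(sendAsync("event-" + std::to_string(i)));
        // Settlements arrive independently, possibly out of send order.
        for (auto& f : pending)
            if (!f.get()) std::cout << "a send was rejected\n";
    }

The point is that each message still gets its own accept/reject result; pipelining only means the client doesn't wait for one settlement before sending the next.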
This theory is further supported by the RetryPolicy class used in the ClientEntity.RetryPolicy property of the EventHubSender class.
In the API Management section on logging-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.

another reliable way to do PULL-PUSH sync in ZeroMQ

If you're using PUSH sockets, you'll find that the first PULL socket to connect will grab an unfair share of messages. The accurate rotation of messages only happens when all PULL sockets are successfully connected, which can take some milliseconds. As an alternative to PUSH/PULL, for lower data rates, consider using ROUTER/DEALER and the load balancing pattern.
So one way to do sync in PUSH/PULL is using the load balancing pattern.
For this specific case below, I wonder whether there is another way to do sync:
I could make each worker block until its PULL socket has successfully connected, and then have the worker send a special message to the 'sink' over the worker's connection to the sink. After the 'sink' has received one such special message per worker, the 'sink' sends a message over REQ-REP to the 'ventilator' to notify it that all workers are ready. The 'ventilator' then starts to distribute jobs to the workers.
Is it reliable?
Yes, so long as the Sink knows how many Workers to wait for before telling the Ventilator that it's OK to start sending messages. There's the question of whether the special messages from the Workers get through if they start up before the Sink connects - but you could solve that by having them keep sending their special message until they start getting data from the Ventilator. If you do this, the Sink would of course simply ignore any duplicates it receives.
Of course, that's not quite the same as the Workers having a live, working connection to the Ventilator, but the Ventilator could itself send out special do-nothing messages that the Workers receive. When a Worker receives one of those, that's when it can start sending its special message to the Sink.
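For concreteness, the sink side of that handshake could look like this with the libzmq C API (the worker count, endpoints and the "READY"/"GO" payloads are all assumptions for illustration):

    #include <zmq.h>
    #include <cstring>
    #include <set>
    #include <string>

    int main() {
        const int NUM_WORKERS = 4;  // the sink must know how many workers to expect
        void* ctx = zmq_ctx_new();

        void* pull = zmq_socket(ctx, ZMQ_PULL);  // ready messages, later real results
        zmq_bind(pull, "tcp://*:5558");

        std::set<std::string> ready;  // dedupes repeated READY messages
        while ((int)ready.size() < NUM_WORKERS) {
            char buf[256];
            int n = zmq_recv(pull, buf, sizeof(buf) - 1, 0);
            if (n < 0) break;
            if (n > (int)sizeof(buf) - 1) n = sizeof(buf) - 1;  // zmq_recv reports full size even if truncated
            buf[n] = '\0';
            // workers repeat "READY <id>" until they see real work, so dedupe here
            if (std::strncmp(buf, "READY ", 6) == 0)
                ready.insert(buf + 6);
        }

        // Tell the ventilator it is now safe to start distributing jobs.
        void* req = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(req, "tcp://localhost:5557");
        zmq_send(req, "GO", 2, 0);
        char ack[8];
        zmq_recv(req, ack, sizeof(ack), 0);  // a REQ socket must read the reply

        /* ... continue receiving real results on 'pull' ... */
        zmq_close(req);
        zmq_close(pull);
        zmq_ctx_destroy(ctx);
    }

The ventilator would run a matching REP socket and only start its PUSH loop after receiving the GO.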

sleekxmpp send message to all the resource with same user name

I am trying to send a message from user-B to all the resources logged in under the username user-A, but only the first resource that logged in gets the message. I want something similar to the way presence is broadcast to all of a user's resources. Is there a way to do this using SleekXMPP?
I tried using send_message:

    self.send_message(mto='userA@testserver',
                      mbody='sending - chat message',
                      mtype='chat')
But it is received only by the first resource that logged in.
The server that I am using is Openfire.
It is not the sender, nor the sender's server, but the recipient's server that controls which of the recipient's resources receive a message with type='chat'. Typically, this is based on the priority of the presences set by the recipient's resources.
There are some workarounds, though:
Use a type='headline' message, i.e. pass mtype='headline' to send_message (https://www.rfc-editor.org/rfc/rfc6121#section-5.2.2):
If the 'to' address is the bare JID, the receiving server SHOULD deliver the message to all of the recipient's available resources with non-negative presence priority and MUST deliver the message to at least one of those resources;
Ask the recipient to use clients that support XEP-0280. This allows clients to opt-in to receiving every chat message.
If you have a subscription to the recipient's presence, you can send a separate message to each resource, but that's a very bad idea in many regards (one of them: it can cause duplicates in the offline storage if some resources went offline in the meantime).