Ability to ensure a message was successfully sent to Event Hub from APIM

Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.

Welcome to Stack Overflow!
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how the data is processed; it only cares about making sure the message is durably stored and available for delivery.
For more details, refer to "Why send to an Azure Event Hub?".
Hope this helps.

Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
This theory is further supported by the RetryPolicy class used in the ClientEntity.RetryPolicy property of the EventHubSender class.
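As a rough illustration of that receipt behaviour (using the newer azure-eventhub Python SDK rather than the EventHubSender API discussed above; the connection string and hub name are placeholders), a synchronous send does not return until the service settles the transfer, and a rejection is raised to the caller as an exception:

    from azure.eventhub import EventHubProducerClient, EventData
    from azure.eventhub.exceptions import EventHubError

    producer = EventHubProducerClient.from_connection_string(
        "<connection-string>", eventhub_name="<hub-name>")

    try:
        with producer:
            batch = producer.create_batch()
            batch.add(EventData("payload"))
            producer.send_batch(batch)  # blocks until the service accepts the batch
        print("delivery confirmed by the service")
    except EventHubError as e:
        print(f"delivery failed: {e}")  # rejection is raised to the caller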
In the API Management section on logging-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.

Related

How to stream events with the GCP platform?

I am looking into building a simple solution where producer services push events to a message queue and then have a streaming service make those available through a gRPC streaming API.
Cloud Pub/Sub seems well suited for the job; however, scaling the streaming service means that each copy of that service would need to create its own subscription and delete it before scaling down, which seems unnecessarily complicated and not what the platform was intended for.
On the other hand, Kafka seems to work well for something like this, but I'd like to avoid having to manage the underlying platform itself and instead leverage the cloud infrastructure.
I should also mention that the reason for having a streaming API is to allow for streaming towards a frontend (which may not have access to the underlying infrastructure).
Is there a better way to go about doing something like this with the GCP platform without going the route of deploying and managing my own infrastructure?
If you essentially want ephemeral subscriptions, then there are a few things you can set on the Subscription object when you create a subscription:
Set the expiration_policy to a smaller duration. When a subscriber is not receiving messages for that time period, the subscription will be deleted. The tradeoff is that if your subscriber is down due to a transient issue that lasts longer than this period, then the subscription will be deleted. By default, the expiration is 31 days. You can set this as low as 1 day. For pull subscribers, the subscribers simply need to stop issuing requests to Cloud Pub/Sub for the timer on their expiration to start. For push subscriptions, the timer starts based on when no messages are successfully delivered to the endpoint. Therefore, if no messages are published or if the endpoint is returning an error for all pushed messages, the timer is in effect.
Reduce the value of message_retention_duration. This is the time period for which messages are kept in the event a subscriber is not receiving messages and acking them. By default, this is 7 days. You can set it as low as 10 minutes. The tradeoff is that if your subscriber disconnects or gets behind in processing messages by more than this duration, messages older than that will be deleted and the subscriber will not see them.
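For concreteness, here is a minimal sketch of creating such an ephemeral subscription with the google-cloud-pubsub Python client (project, topic, and subscription names are placeholders):

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path("<project>", "<topic>")
    sub_path = subscriber.subscription_path("<project>", "<instance-specific-name>")

    subscriber.create_subscription(request={
        "name": sub_path,
        "topic": topic_path,
        # Delete the subscription after 1 day without an active subscriber.
        "expiration_policy": {"ttl": {"seconds": 24 * 60 * 60}},
        # Keep unacknowledged messages for at most 10 minutes.
        "message_retention_duration": {"seconds": 600},
    })

    # On a clean shutdown the instance can remove its subscription immediately.
    subscriber.delete_subscription(request={"subscription": sub_path})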
Subscribers that cleanly shut down could probably just call DeleteSubscription themselves so that the subscription goes away immediately, but for ones that shut down unexpectedly, setting these two properties will minimize the time for which the subscription continues to exist and the number of messages (that will never get delivered) that will be retained.
Keep in mind that Cloud Pub/Sub quotas limit one to 10,000 subscriptions per topic and per project. Therefore, if a lot of subscriptions are created and either active or not cleaned up (manually, or automatically after expiration_policy's ttl has passed), then new subscriptions may not be able to be created.
I think your original idea was better than ephemeral subscriptions, tbh. It works, but it feels unnatural. It depends on what your requirements are: for example, do clients only need to receive messages while they're connected, or do they all need to get all messages?
Only While Connected
Your original idea was better imo. What I probably would have done is to create a gRPC stream service that clients could connect to. The implementation is essentially an observer pattern. The consumer will receive a message and then iterate through the subscribers to do a "Send" to all of them. From there, any time a client connects to the service, it just registers itself with that observer collection and unregisters when it disconnects. Horizontal scaling is passive since clients are sticky to whatever instance they've connected to.
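A minimal, framework-agnostic sketch of that observer collection (all names here are illustrative, not from any SDK; in practice the send callable would write to the client's gRPC response stream):

    import threading

    class Broadcaster:
        # Observer pattern: the consumer broadcasts, connected clients receive.
        def __init__(self):
            self._lock = threading.Lock()
            self._subscribers = {}  # client_id -> send callable

        def register(self, client_id, send):
            with self._lock:
                self._subscribers[client_id] = send

        def unregister(self, client_id):
            with self._lock:
                self._subscribers.pop(client_id, None)

        def broadcast(self, message):
            with self._lock:
                targets = list(self._subscribers.items())
            for client_id, send in targets:
                try:
                    send(message)               # push to the client's stream
                except Exception:
                    self.unregister(client_id)  # drop broken connections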
Everyone always gets the message, eventually
The concept is similar to the above, but the client doesn't implicitly un-register from the observer on disconnect. Instead, it registers and un-registers explicitly (through a method/command designed to do so). Modify the 'on disconnected' logic to tell the observer list that the client has gone offline. The consumer's broadcast logic then changes slightly: it iterates through the list and says "if online, then send, else queue", sending the message to an ephemeral queue that belongs to the client. Your 'on connect' logic will then send all messages that are in the queue to the client before informing the consumer that it's back online. Basically an inbox. Setting up ephemeral, self-deleting queues is really easy in most products like RabbitMQ. I think you'll have to do a bit of managing whether or not it's OK to delete a queue, though. For example, never delete the queue unless the client explicitly unsubscribes or has been inactive for some set period. Fail to do that, and the whole inbox idea falls apart.
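As a sketch of those self-deleting inbox queues with RabbitMQ's pika Python client (the exchange and queue names are made up for illustration): the per-queue x-expires argument deletes an inbox only after it has been unused for a set period, which matches the "inactive for some set period" rule above:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.exchange_declare(exchange="events", exchange_type="fanout")

    # One inbox per client; RabbitMQ deletes the queue on its own only after
    # 24 hours with no consumers and no redeclarations (x-expires is in ms).
    channel.queue_declare(queue="inbox.client-42",
                          arguments={"x-expires": 24 * 60 * 60 * 1000})
    channel.queue_bind(exchange="events", queue="inbox.client-42")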
The selected answer above is most similar to what I'm describing here, in that the subscription is the queue. If I did this, then I'd probably implement it as an internal bus instead of an observer (since that would be unnecessary): you create a consumer on demand for a connecting client that literally just forwards the message. The message consumer subscribes and unsubscribes based on whether or not the client is connected. As Kamal noted, you'll run into problems if your scale exceeds the maximum number of subscriptions allowed by Pub/Sub. If you find yourself in that position, you can remove that constraint by implementing the pattern above. It's basically the same pattern, but you shift the responsibility over to your infrastructure, where the only constraint is your own resources.
gRPC makes this mechanism pretty easy. Alternatively, for web, if you're on a Microsoft stack, then SignalR makes this pretty easy too. Clients connect to the hub, and you can publish to all connected clients. The consumer pattern here remains mostly the same, but you don't have to implement the observer pattern by hand.
(note: the diagram from the original answer is omitted here; its arrows pointed in the direction of dependency, not data flow)

Azure Service Bus Topic: with paired namespace or with retry?

We are using an Azure Service Bus topic in Workflow Manager (an approval process). We cannot afford to lose or duplicate messages when we push them to the Service Bus topic. Now there are two options:
a. Use retry only.
b. Use a paired service bus only, without retry.
As we cannot use both together, assume that the primary service bus is unavailable during a message push: the message is then pushed to the paired service bus, and when the primary becomes available again, the message is automatically forwarded to the primary. But if we also use retry, the retry will push the message to the primary while the message has already gone to the paired service bus as well, so there is a chance of processing duplicate messages.
Which is the better option, "a" or "b", for pushing messages to the service bus given this problem statement?
Both options have their pros and cons.
With Paired Namespaces you get the ability to continue sending messages while your primary namespace is down. But don't get fooled: you only store those messages while the primary namespace is down; they are not retrieved by your receivers. Other drawbacks include:
No good testability.
Increased cost (you send to the secondary, then retrieve the messages back from it to send to the primary).
Failover to the secondary is not very intuitive: you have to manually retry the message after a failure; it does not automatically switch to the secondary namespace.
Have a look at this post for more details.
With the retry approach you gain simplicity, and retries are something you'd need anyway: with Azure Service Bus, operations can fail with intermittent exceptions, and you should retry regardless. The drawback of having only retries is that they don't protect against outages. That's why you could combine them with a secondary namespace using a custom implementation, but that's a whole different can of worms. Libraries like NServiceBus provide a custom implementation you can get the idea from.
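A rough sketch of what such a custom retry-plus-secondary combination could look like, using the azure-servicebus Python client (the connection strings, topic name, and retry count are placeholders; note that the consumer still has to de-duplicate, because a send that appears to fail may in fact have succeeded):

    from azure.servicebus import ServiceBusClient, ServiceBusMessage
    from azure.servicebus.exceptions import ServiceBusError

    def send_with_fallback(primary_conn, secondary_conn, topic, payload, attempts=3):
        # Retry against the primary first; most failures are intermittent.
        # Only after exhausting the retries, fall back to the secondary namespace.
        for conn_str in [primary_conn] * attempts + [secondary_conn]:
            try:
                client = ServiceBusClient.from_connection_string(conn_str)
                with client, client.get_topic_sender(topic_name=topic) as sender:
                    sender.send_messages(ServiceBusMessage(payload))
                return
            except ServiceBusError:
                continue  # a timed-out send may still have gone through: de-duplicate downstream
        raise RuntimeError("send failed on both namespaces")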

How to expose an asynchronous api as a custom akka stream Source now that ActorPublisher is deprecated?

With ActorPublisher deprecated in favor of GraphStage, it looks as though I have to give up my actor-managed state for GraphStageLogic-managed state. But with the actor-managed state I was able to mutate state by sending arbitrary messages to my actor, and with GraphStageLogic I don't see how to do that.
So previously, if I wanted to create a Source to expose data that is made available via HTTP request/response, then with ActorPublisher demand was communicated to my actor by Request messages, to which I could react by kicking off an HTTP request in the background and sending the responses to my actor so I could send their contents downstream.
It is not obvious how to do this with a GraphStageLogic instance if I cannot send it arbitrary messages. Demand is communicated by OnPull(), to which I can react by kicking off an HTTP request in the background. But then when the response comes in, how do I safely mutate the GraphStageLogic's state?
(aside: just in case it matters, I'm using Akka.Net, but I believe this applies to the whole Akka streams model. I assume the solution in Akka is also the solution in Akka.Net. I also assume that ActorPublisher will also be deprecated in Akka.Net eventually even though it is not at the moment.)
I believe that the question is referring to "asynchronous side-channels" and is discussed here:
http://doc.akka.io/docs/akka/2.5.3/scala/stream/stream-customize.html#using-asynchronous-side-channels.
Using asynchronous side-channels
In order to receive asynchronous events that are not arriving as stream elements (for example a completion of a future or a callback from a 3rd party API) one must acquire an AsyncCallback by calling getAsyncCallback() from the stage logic. The method getAsyncCallback takes as a parameter a callback that will be called once the asynchronous event fires.

How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call an (external) REST API so that the client application can be notified of the new transaction?
Apart from REST, is there any other option?
It's better to use events
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that applications may listen for and take actions on. There is a set of pre-defined events, and chaincodes can generate custom events. Events are consumed by 1 or more event adapters. Adapters may further deliver events using other vehicles such as Web hooks or Kafka.
An application can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example for how to work with Events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
To add to Sergey's answer, there are 3 types of events:
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are user hooks that let user chaincode create custom events. [A weird thing I noticed is that only one CHAINCODE EVENT per invoke is allowed in the current design.]
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey mentioned.
In your case, I am guessing that you are looking for a successful transaction, so you should listen for BLOCK events and REJECTION events. The transaction UUID that you received when your invoke was triggered can be used to scan the events and trigger an action when it matches.
Also note that if a transaction results in a REJECTION EVENT, then it will not have a BLOCK EVENT.
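To sketch that correlation logic (purely illustrative Python: event_stream stands in for a hypothetical iterable of (event_type, payload) pairs that you would wire up to Fabric's gRPC event port with your SDK's listener, and the field names are made up):

    def wait_for_commit(event_stream, tx_uuid):
        # Returns True once the transaction appears in a BLOCK event,
        # False if a REJECTION event for it arrives first.
        for event_type, payload in event_stream:
            if event_type == "REJECTION" and payload.get("tx_uuid") == tx_uuid:
                return False  # rejected transactions never get a BLOCK event
            if event_type == "BLOCK" and tx_uuid in payload.get("tx_uuids", []):
                return True   # committed to the ledger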
Hope this helps.

How to ensure that a text message was sent successfully via JMS?

I have written a text message sender program via JMS in C++, as follows:
    tibems_status status = TIBEMS_OK;

    // Returns TIBEMS_OK (0) if the send call itself completed without error.
    status = tibemsMsgProducer_SendToDestination(m_tProducer,
                                                 m_tDestination,
                                                 m_tMsg);
Suppose status == 0; this means only that the function call succeeded. It doesn't mean that my text message was sent successfully.
How can I ensure that my message was sent successfully? Should I get an ID or an acknowledgement back from the JMS queue?
It depends on the Message Delivery Mode.
When a PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call will wait for the EMS server to reply with a confirmation.
When a NON_PERSISTENT message is sent, the tibemsMsgProducer_SendToDestination call may or may not wait for a confirmation, depending on whether authorization is enabled and on the npsend_check_mode setting. See the EMS docs for specific details.
Lastly, when a RELIABLE_DELIVERY message is sent, the tibemsMsgProducer_SendToDestination call does not wait for a confirmation and will only fail if the connection to the EMS server is lost.
However, even in the situations where a confirmation is sent, this is only confirmation that the EMS server has received the message. It does not confirm that the message was received and processed by the message consumer. EMS Monitoring Messages can be used to determine if the message was acknowledged by the consumer.
The message monitoring topics are in the form $sys.monitor.<D>.<E>.<destination>, where <D> matches Q|q|T|t, <E> matches s|r|a|p|* and <destination> is the name of the destination. For instance, to monitor for message acknowledgment for the queue named beterman, your program would subscribe to $sys.monitor.q.a.beterman (or $sys.monitor.Q.a.beterman if you want a copy of the message that was acknowledged).
The monitoring messages contain many properties, including the msg_id, source_name and target_name. You can use that information to correlate it back to the message you sent.
Otherwise, the simpler option is to use a tibemsMsgRequestor instead of a tibemsMsgProducer. tibemsMsgRequestor_Request will send the message and wait for a reply from the recipient. In this case you are best to use RELIABLE_DELIVERY and NO_ACKNOWLEDGE, to remove all the confirmation and acknowledgement messages between the producer and the EMS server, and between the EMS server and the consumer.
However, if you do go down the tibemsMsgRequestor route, then you may also want to consider simply using a HTTP request instead, with a load balancer in place of the EMS server. Architecturally there isn't much difference between the two options (EMS uses persistent TCP connections, HTTP doesn't)
    Producer -> EMS Server    -> ConsumerA
                              -> ConsumerB

    Client   -> Load Balancer -> ServerA
                              -> ServerB
But with HTTP you have clear semantics for each of the methods. GET is safe (does not change state), PUT and DELETE are idempotent (multiple identical requests should have the same effect as a single request), and POST is non-idempotent (it causes a change in server state each time it is performed), etc. You also have well defined status codes. If you're using tibemsMsgRequestor you'll need to create bespoke semantics and response status, which will require extra effort to create, maintain and to train the other developers in your team on.
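To illustrate why those semantics matter for reliable delivery (hypothetical URLs and payloads, using Python's requests library): an idempotent method can be retried blindly, while a non-idempotent one cannot:

    import requests

    # PUT is idempotent: repeating it converges to the same server state,
    # so blind retries on failure are safe.
    for _ in range(3):
        resp = requests.put("https://api.example.com/orders/42",
                            json={"status": "approved"}, timeout=5)
        if resp.ok:
            break

    # POST is non-idempotent: each attempt may create another order,
    # so retries need de-duplication (e.g. an idempotency key).
    resp = requests.post("https://api.example.com/orders",
                         json={"item": "widget"}, timeout=5)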
Also, it is far easier to find developers with HTTP skills than EMS skills, and it's far easier to find information on HTTP than on EMS, so the tibemsMsgRequestor option will make recruiting more difficult and problem-solving more difficult.
Because of this, HTTP is the better option IMO for request-reply, or for when you want to ensure that the message sent was processed successfully.