How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call an external REST API so that the client application can be notified of the new transaction?
Apart from REST, is there any other option?

It's better to use events
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that
applications may listen for and take actions on. There is a set of
pre-defined events, and chaincodes can generate custom events. Events
are consumed by 1 or more event adapters. Adapters may further deliver
events using other vehicles such as Web hooks or Kafka.
Your application can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example of how to work with events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
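For example, a custom event is set on the chaincode stub before the transaction completes. Below is a hedged sketch against the Fabric 1.x fabric-shim Node API (the 0.6-era Go shim exposes an equivalent stub.SetEvent); the event name and payload are illustrative, not part of any standard:

```typescript
// Hypothetical Node chaincode that emits a custom CHAINCODE EVENT.
import { Shim, ChaincodeStub, ChaincodeResponse } from 'fabric-shim';

class EventDemo {
  async Init(stub: ChaincodeStub): Promise<ChaincodeResponse> {
    return Shim.success();
  }

  async Invoke(stub: ChaincodeStub): Promise<ChaincodeResponse> {
    // ... perform the usual state changes for this transaction ...

    // Attach a custom event to the transaction; subscribed applications
    // receive it once the transaction is committed to a block.
    const payload = Buffer.from(JSON.stringify({ status: 'transfer complete' }));
    stub.setEvent('transferCompleted', payload);
    return Shim.success();
  }
}

Shim.start(new EventDemo());
```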

To add to Sergey's answer, there are three types of events.
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are hooks that let user chaincode create custom events. (One oddity of the current design: only one CHAINCODE EVENT per invoke is allowed.)
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey mentioned.
In your case, I am guessing that you are looking for a successful transaction. In that case, you should listen for BLOCK events and REJECTION events. The transaction UUID you received when your invoke was triggered can be used to scan the events and trigger an action when it matches.
Also note that if a transaction results in a REJECTION EVENT, it will not have a BLOCK EVENT.
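As a rough sketch of such a listener, here is what it looks like with the newer fabric-client Node SDK (an assumption on my part; the 0.6-era event consumer works along the same lines). `channel`, `peer`, and `txId` are assumed to come from your invoke code, and `notifyClient`/`handleRejection` are hypothetical hooks into your application:

```typescript
// Hedged sketch: react to the commit or rejection of one transaction.
import { Channel, Peer } from 'fabric-client';

declare const channel: Channel; // assumed: channel set up elsewhere
declare const peer: Peer;       // assumed: the peer whose events we follow
declare const txId: string;     // the transaction ID returned by the invoke

declare function notifyClient(txId: string): void;                  // hypothetical
declare function handleRejection(txId: string, code: string): void; // hypothetical

const eventHub = channel.newChannelEventHub(peer);
eventHub.registerTxEvent(
  txId,
  (tx: string, code: string) => {
    // Fired when a block containing the transaction is committed;
    // the validation code distinguishes success from rejection.
    if (code === 'VALID') {
      notifyClient(tx);
    } else {
      handleRejection(tx, code);
    }
  },
  (err: Error) => console.error('event hub disconnected', err),
  { unregister: true } // clean up after the first matching event
);
eventHub.connect();
```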
Hope this helps.

IoT lifecycle events handling

What is the best practice to check whether an AWS IoT Core thing is still offline?
Being able to query the state of an AWS IoT thing will for many be an essential part of their application. Luckily, AWS has a best practice on how to get lifecycle events here: https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html
It says that we should check whether the device is still offline before performing any actions.
I'm handling it on a Node.js server (listening to events), so the question is: what's the best way to handle this?
For now the plan is to create some storage (Redis?) and implement a timeout (5-10 s): if I receive a disconnect event, I'll put it in the DB, wait out the timeout, and if no other messages regarding this device arrive (Connected), I'll run my logic.
Is this the right approach?
The point is not to use SQS from AWS.
And as the AWS docs say, the order of messages is not guaranteed, so what's the best practice to handle that?
If your device emits a signal at periodic intervals, then you can treat that as a heartbeat signal.
You can maintain a timer (x minutes/hours, etc.) and wait for the heartbeat signal from the device.
If the timer times out and you have not received the heartbeat signal, then it is safe to assume that the device has gone offline. Such events are easy to model as a detector model in AWS IoT Events.
This example from AWS IoT Events does exactly the same thing.
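For the debounce approach from the question, a minimal in-memory sketch is below (an illustration only: Redis would replace the maps in production, and `markDeviceOffline` is a hypothetical hook). Lifecycle events carry a `timestamp` field, which also lets you discard out-of-order events, addressing the ordering concern:

```typescript
// Debounce 'disconnected' lifecycle events with a short grace period.
type LifecycleEvent = {
  clientId: string;
  eventType: 'connected' | 'disconnected';
  timestamp: number; // milliseconds, as delivered in the lifecycle payload
};

const lastSeen = new Map<string, number>();        // newest timestamp per device
const pending = new Map<string, NodeJS.Timeout>(); // running grace timers
const GRACE_MS = 10_000;                           // the 5-10 s window from the question

function markDeviceOffline(clientId: string): void {
  console.log(`${clientId} is offline`); // hypothetical: your offline logic
}

function onLifecycleEvent(ev: LifecycleEvent): void {
  const prev = lastSeen.get(ev.clientId) ?? 0;
  if (ev.timestamp < prev) return; // stale, out-of-order event: ignore it
  lastSeen.set(ev.clientId, ev.timestamp);

  if (ev.eventType === 'connected') {
    // Device came back within the grace period: cancel the offline action.
    const timer = pending.get(ev.clientId);
    if (timer) {
      clearTimeout(timer);
      pending.delete(ev.clientId);
    }
  } else {
    pending.set(ev.clientId, setTimeout(() => {
      pending.delete(ev.clientId);
      markDeviceOffline(ev.clientId);
    }, GRACE_MS));
  }
}
```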

Ability to ensure message was successfully sent to Event Hub from APIM

Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.
Welcome to Stack Overflow!
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed; it just cares about making sure the message will be successfully delivered.
For more details, refer to “Why send to an Azure Event Hub?”.
Hope this helps.
Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operation to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
This theory is further supported by the RetryPolicy class used in the ClientEntity.RetryPolicy property of the EventHubSender class.
In the API Management documentation on log-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.
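If you want to see the acknowledgement behaviour for yourself outside of APIM, a hedged sketch with the @azure/event-hubs Node SDK (my choice of client, not something the policy exposes) shows that the send only completes once the service has accepted the message:

```typescript
// Awaiting sendBatch resolves only after Event Hubs accepts the batch;
// a rejection surfaces as a thrown error, mirroring the settlement
// behaviour quoted above.
import { EventHubProducerClient } from '@azure/event-hubs';

async function sendWithReceipt(connectionString: string, eventHubName: string): Promise<void> {
  const producer = new EventHubProducerClient(connectionString, eventHubName);
  try {
    const batch = await producer.createBatch();
    batch.tryAdd({ body: { requestId: 'abc-123', status: 'logged' } }); // illustrative payload
    await producer.sendBatch(batch);
    console.log('message durably accepted by Event Hubs');
  } finally {
    await producer.close();
  }
}
```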

How to stream events with GCP platform?

I am looking into building a simple solution where producer services push events to a message queue and then have a streaming service make those available through gRPC streaming API.
Cloud Pub/Sub seems well suited for the job; however, scaling the streaming service means that each copy of the service would need to create its own subscription and delete it before scaling down, which seems unnecessarily complicated and not what the platform was intended for.
On the other hand, Kafka seems to work well for something like this, but I'd like to avoid having to manage the underlying platform itself and instead leverage the cloud infrastructure.
I should also mention that the reason for having a streaming API is to allow streaming towards a frontend (which may not have access to the underlying infrastructure).
Is there a better way to go about doing something like this with the GCP platform without going the route of deploying and managing my own infrastructure?
If you essentially want ephemeral subscriptions, then there are a few things you can set on the Subscription object when you create a subscription:
Set the expiration_policy to a smaller duration. When a subscriber is not receiving messages for that time period, the subscription will be deleted. The tradeoff is that if your subscriber is down due to a transient issue that lasts longer than this period, then the subscription will be deleted. By default, the expiration is 31 days. You can set this as low as 1 day. For pull subscribers, the subscribers simply need to stop issuing requests to Cloud Pub/Sub for the timer on their expiration to start. For push subscriptions, the timer starts based on when no messages are successfully delivered to the endpoint. Therefore, if no messages are published or if the endpoint is returning an error for all pushed messages, the timer is in effect.
Reduce the value of message_retention_duration. This is the time period for which messages are kept in the event a subscriber is not receiving messages and acking them. By default, this is 7 days. You can set it as low as 10 minutes. The tradeoff is that if your subscriber disconnects or gets behind in processing messages by more than this duration, messages older than that will be deleted and the subscriber will not see them.
Subscribers that cleanly shut down could probably just call DeleteSubscription themselves so that the subscription goes away immediately, but for ones that shut down unexpectedly, setting these two properties will minimize the time for which the subscription continues to exist and the number of messages (that will never get delivered) that will be retained.
Keep in mind that Cloud Pub/Sub quotas limit one to 10,000 subscriptions per topic and per project. Therefore, if a lot of subscriptions are created and either active or not cleaned up (manually, or automatically after expiration_policy's ttl has passed), then new subscriptions may not be able to be created.
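For concreteness, here is a hedged sketch of creating such an ephemeral subscription with the @google-cloud/pubsub Node client (topic and subscription names are placeholders):

```typescript
// Create a subscription that expires after 1 idle day and retains
// unacknowledged messages for only 10 minutes.
import { PubSub } from '@google-cloud/pubsub';

async function createEphemeralSubscription(): Promise<void> {
  const pubsub = new PubSub();
  await pubsub.topic('my-topic').createSubscription('my-ephemeral-sub', {
    expirationPolicy: { ttl: { seconds: 24 * 60 * 60 } }, // delete after 1 day of inactivity
    messageRetentionDuration: { seconds: 10 * 60 },       // keep unacked messages for 10 min
  });
}
```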
I think your original idea was better than ephemeral subscriptions, to be honest. I mean, it works, but it feels unnatural. It depends on what your requirements are. For example, do clients only need to receive messages while they're connected, or do they all need to get all messages?
Only While Connected
Your original idea was better imo. What I probably would have done is to create a gRPC stream service that clients could connect to. The implementation is essentially an observer pattern. The consumer will receive a message and then iterate through the subscribers to do a "Send" to all of them. From there, any time a client connects to the service, it just registers itself with that observer collection and unregisters when it disconnects. Horizontal scaling is passive since clients are sticky to whatever instance they've connected to.
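A minimal sketch of that observer collection (all names illustrative; the gRPC plumbing is omitted):

```typescript
// One Pub/Sub consumer per instance fans messages out to the gRPC
// streams currently connected to that instance.
type Subscriber = { id: string; send: (msg: string) => void };

const subscribers = new Map<string, Subscriber>();

// Called from the gRPC stream handler on connect/disconnect.
function register(sub: Subscriber): void {
  subscribers.set(sub.id, sub);
}
function unregister(id: string): void {
  subscribers.delete(id);
}

// Called by the single Pub/Sub consumer for every message.
function broadcast(msg: string): void {
  for (const sub of subscribers.values()) {
    sub.send(msg); // sticky: only clients connected to this instance get it
  }
}
```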
Everyone always gets the message, eventually
The concept is similar to the above, but the client doesn't implicitly unregister from the observer on disconnect. Instead, it registers and unregisters explicitly (through a method/command designed to do so). Modify the 'on disconnected' logic to tell the observer list that the client has gone offline. The consumer's broadcast logic is then slightly different: it iterates through the list and says "if online, then send, else queue", sending the message to an ephemeral queue that belongs to the client. Your 'on connect' logic then sends all queued messages to the client before informing the consumer that it's back online. Basically an inbox. Setting up ephemeral, self-deleting queues is really easy in most products like RabbitMQ. You'll have to do a bit of managing of whether or not it's OK to delete a queue, though; for example, never delete the queue unless the client explicitly unsubscribes or has been inactive for too long. Fail to do that, and the whole inbox idea falls apart.
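A rough sketch of that inbox bookkeeping (names illustrative; the per-client queue would live in something like RabbitMQ rather than in memory):

```typescript
// Explicit register/unregister plus a per-client queue for offline delivery.
type Client = { online: boolean; send: (msg: string) => void; inbox: string[] };

const clients = new Map<string, Client>();

function deliver(clientId: string, msg: string): void {
  const client = clients.get(clientId);
  if (!client) return;      // never registered, or explicitly unsubscribed
  if (client.online) {
    client.send(msg);       // connected: push immediately
  } else {
    client.inbox.push(msg); // offline: park it in the client's inbox
  }
}

function onReconnect(clientId: string): void {
  const client = clients.get(clientId);
  if (!client) return;
  for (const msg of client.inbox.splice(0)) {
    client.send(msg);       // drain the inbox before going live again
  }
  client.online = true;
}
```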
The selected answer above is most similar to what I'm suggesting here, in that the subscription is the queue. If I did this, I'd probably implement it as an internal bus instead of an observer (since that would be unnecessary): you create a consumer on demand for a connecting client that literally just forwards the message. The message consumer subscribes and unsubscribes based on whether or not the client is connected. As Kamal noted, you'll run into problems if your scale exceeds the maximum number of subscriptions allowed by Pub/Sub. If you find yourself in that position, you can unshackle yourself from that constraint by implementing the pattern above. It's basically the same pattern, but you shift the responsibility over to your infrastructure, where the only constraint is your own resources.
gRPC makes this mechanism pretty easy. Alternatively, for web, if you're on a Microsoft stack, then SignalR makes this pretty easy too. Clients connect to the hub, and you can publish to all connected clients. The consumer pattern here remains mostly the same, but you don't have to implement the observer pattern by hand.

Which one is synchronous and which is asynchronous communication? And why?

I am confused about both kinds of communication for the given scenarios. I feel that every single list item could be synchronous communication.
Order service calling the shipping service to proceed with shipment.
User buying items from the User Interface (UI) service, resulting in invocation of the Order service.
User Interface (UI) service calling the catalog service to get information about all of the items that it needs to render.
All three examples would be considered asynchronous, as they prompt a response through cause and effect: call and respond. While all three of these could happen concurrently, none of them is synchronous in and of itself.
Synchronous communication happens simultaneously, like two people editing the same document online. Each editor reads and writes at the same time, but does not interrupt the other in any way.
The best example of synchronous communication is a telephone conversation. All connected parties can hear (receive) and speak (transmit) at the same time, and although humans have difficulty performing both actions simultaneously, the telephone connection itself has no trouble providing both concurrently.
Asynchronous acts like a two-way radio. You must stop transmitting in order to receive.
Synchronous = in sync
The sender waits for a response from the receiver before continuing.
Both sender and receiver must be in an active state.
The sender sends data to the receiver when it requires an immediate response to continue processing.
When you execute something synchronously, you wait for it to finish before moving on to another task.
Asynchronous = out of sync
The sender does not wait for a response from the receiver.
The receiver can be inactive.
Once the receiver becomes active, it will receive the data and process it.
The sender puts data in a message queue and does not require an immediate response to continue processing.
When you execute something asynchronously, you can move on to another task before it finishes.
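A toy illustration of the difference (all names are placeholders):

```typescript
// Synchronous: wait for the shipping response before moving on.
// Asynchronous: hand the order to a queue and continue immediately.
const messageQueue: string[] = [];

async function callShippingService(order: string): Promise<string> {
  return `receipt-for-${order}`; // stands in for a real remote call
}

async function placeOrderSync(order: string): Promise<void> {
  const receipt = await callShippingService(order); // blocked until the response arrives
  console.log('shipment confirmed:', receipt);
}

function placeOrderAsync(order: string): void {
  messageQueue.push(order); // the receiver may be inactive right now; that's fine
  console.log('order queued, continuing without waiting');
}
```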
In your case,
Catalog Service <-- UI --> Order Service --> Shipment service
1) The UI has to fetch item details from the Catalog service (synchronous, because it needs the items immediately).
2) Once all items are selected, the UI has to invoke the Order service (synchronous or asynchronous, depending on the user action).
The user might add items to the shopping cart for later, or to favourites, or process the order immediately.
3) Once all items exist in the shopping cart collection, it has to invoke the shipment service (asynchronous).
Payment should be synchronous; you need an acknowledgement.
Assuming payment and the other steps are done, it calls the shipment delivery service.
Delivery is asynchronous because it can't get an acknowledgement immediately; it may take a couple of days, etc.

Automate Suspended orchestrations to be resumed automatically

We have a BizTalk application which sends XML files to external applications by using a web-service.
BizTalk calls the web-services method by passing XML file and destination application URL as parameters.
If the external applications are not able to receive the XML, or if no response is received from the web service back to BizTalk, the message gets suspended in BizTalk.
At present, in this situation we manually go to the BizTalk admin console and resume each suspended message.
Our clients want this process to be fully automated: they want a dashboard that shows a list of message details and a button that, when clicked, resumes all the suspended messages.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured for 5 hours. Or set a retry interval of 300 minutes and multiple retries on the send port, if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You set it up so that the orchestration sends the file out via a send shape while initializing a correlation set.
You then put in a listen shape with two branches:
- the receive (following the initialized correlation set)
- a delay shape set to 5 hours
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in or the response is received. In 40Alpha's example, the orchestration would wake up many times, consuming extra resources.
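The listen shape is essentially a race between the correlated receive and the delay. As a rough analogy in general-purpose code (not BizTalk itself; all names are placeholders):

```typescript
// Race the correlated response against a 5-hour delay and handle
// whichever branch wins, instead of leaving a suspended instance.
const TIMEOUT = Symbol('timeout');

async function sendAndListen(
  sendFile: () => Promise<void>,         // the send shape
  waitForResponse: () => Promise<string> // the correlated receive
): Promise<void> {
  await sendFile(); // initializes the correlation set
  const DELAY_MS = 5 * 60 * 60 * 1000;
  const delay = new Promise<typeof TIMEOUT>(resolve =>
    setTimeout(() => resolve(TIMEOUT), DELAY_MS)
  );

  const winner = await Promise.race([waitForResponse(), delay]);
  if (winner === TIMEOUT) {
    // delay branch: resubmit or escalate
  } else {
    // receive branch: response arrived, handle it gracefully
  }
}
```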
You may want to look at a product like BizTalk360. It has that sort of monitoring and command capability built into it. I'm not sure it works with BizTalk 2006 R2, though, but you should be thinking about moving off that platform anyway, as it is going out of Microsoft support.