Which one is synchronous and which is asynchronous communication? And why? - web-services

I am confused about both types of communication for the given scenario. I feel that every single list item could be synchronous communication.
The Order Service calling the Shipping Service to proceed with shipment.
A user buying items from the User Interface (UI) Service, resulting in invocation of the Order Service.
The UI Service calling the Catalog Service to get information about all of the items that it needs to render.

All three examples would be considered asynchronous, as each prompts a response through cause and effect - call and respond. While all three of these could happen concurrently, each in and of itself is not synchronous.
Synchronous communication happens simultaneously, like two people editing the same document online. Each editor reads and writes at the same time, but does not interrupt the other in any way.
The best example of synchronous communication is a telephone conversation. All connected parties can hear (receive) and speak (transmit) at the same time, and although humans have difficulty performing both actions simultaneously, the telephone connection itself has no trouble providing both concurrently.
Asynchronous acts like a two-way radio. You must stop transmitting in order to receive.

Synchronous = in sync
The sender waits for a response from the receiver before continuing.
Both the sender and the receiver must be in an active state.
The sender sends data to the receiver because it requires an immediate response to continue processing.
When you execute something synchronously, you wait for it to finish before moving on to another task.
Asynchronous = out of sync
The sender does not wait for a response from the receiver.
The receiver can be inactive.
Once the receiver is active, it will receive and process the data.
The sender puts the data in a message queue and does not require an immediate response to continue processing.
When you execute something asynchronously, you can move on to another task before it finishes.
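To make the distinction concrete, here is a minimal sketch in Python (the process_order function and the worker thread are hypothetical stand-ins, not part of the original question) contrasting a blocking call with handing the work to a queue and moving on:

    import queue
    import threading
    import time

    def process_order(order):
        """Hypothetical long-running piece of work."""
        time.sleep(2)
        return f"processed {order}"

    # Synchronous: the caller blocks until the result is available.
    result = process_order("order-1")
    print(result)                            # only reached after process_order finishes

    # Asynchronous: the caller drops the work into a queue and continues.
    work_queue = queue.Queue()

    def worker():
        while True:
            order = work_queue.get()         # the receiver picks up work when it is active
            print(process_order(order))
            work_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()
    work_queue.put("order-2")                # the sender does not wait for a response
    print("sender keeps going immediately")
    work_queue.join()                        # only the demo waits here, so the output is visible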
In your case,
Catalog Service <-- UI --> Order Service --> Shipment service
1) The UI has to fetch item details from the Catalog Service (synchronous, because it needs the items immediately).
2) Once all items are selected, the UI has to invoke the Order Service (synchronous or asynchronous, depending on the user action).
The user might add items to the shopping cart for later, save them to favourites, or process the order immediately.
3) Once all items are in the shopping cart collection, it has to invoke the Shipment Service (asynchronous).
Payment should be synchronous: you need an acknowledgement.
Assuming payment and the other steps are done, it calls the shipment/delivery service.
Delivery is asynchronous because it can't get an acknowledgement immediately; it may take a delay of two days or more.
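As an illustration only - the service calls below are hypothetical placeholders, not real APIs from the question - the flow might look like this, with the catalog and payment calls blocking while the shipment request is merely enqueued:

    import queue

    shipment_queue = queue.Queue()            # stands in for a real message broker

    def get_catalog_items(item_ids):          # hypothetical synchronous call
        return [{"id": i, "price": 10} for i in item_ids]

    def charge_payment(order):                # hypothetical synchronous call
        return {"status": "ok", "auth_code": "A123"}

    def place_order(item_ids):
        items = get_catalog_items(item_ids)   # synchronous: the UI needs the data now
        order = {"items": items, "total": sum(i["price"] for i in items)}
        ack = charge_payment(order)           # synchronous: we must know it succeeded
        if ack["status"] == "ok":
            shipment_queue.put(order)         # asynchronous: shipment is processed later
        return ack

    place_order(["sku-1", "sku-2"])           # returns as soon as payment is acknowledged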

Related

Applied Eventual Consistency and Race Conditions

I have a question regarding the effects of eventually consistent (EC) microservice systems.
Imagine we have a booking system: a user-service A and a booking-service B. Each service has its own database. Imagine the system processes concurrent bookings of the same resource for distinct users at the same time. Let's assume we have a Runtime Verification System checking for concurrent bookings.
Would it be possible that the monitor does not detect the concurrent booking at B, because the database update is delayed by the EC mechanism?
In your example, the Booking Service is the source of truth (presumably) for whether or not the resource is available to book. So, that service should be pretty clear on allowing the first booking request to happen and rejecting the second.
In a case like this, where "first come first served" is the requirement, you'd want an intermediate state that would wait for a response from the Booking Service and update the User Service only when a response has been received.
If your architecture is set up right, User Service shouldn't be calling Booking Service directly anyway - it should be communicating through a messaging plane. As such, when the User clicks "Book Now," you could generate a resourceBookingRequested message and submit it to the queue. You'd acknowledge this request has been queued to the user and update their UI to "Awaiting Booking Confirmation..." or something similar.
Once the booking is accepted or rejected, the User Service subscribes to the resulting message and updates the UI (and/or takes other actions like sending an email) to let the user know whether their request succeeded or not.
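A rough sketch of that shape, with in-process queues and hypothetical handler names standing in for a real messaging plane:

    import queue
    import uuid

    booking_requests = queue.Queue()    # carries "resourceBookingRequested" messages
    booking_results = queue.Queue()     # carries accepted/rejected results

    booked_resources = set()            # the Booking Service's source of truth

    def user_service_book(resource_id):
        """User clicked Book Now: enqueue the request and show a pending state."""
        request_id = str(uuid.uuid4())
        booking_requests.put({"request_id": request_id, "resource_id": resource_id})
        print(f"{request_id}: Awaiting Booking Confirmation...")
        return request_id

    def booking_service_handle():
        """Booking Service handles requests one at a time: first come, first served."""
        msg = booking_requests.get()
        if msg["resource_id"] in booked_resources:
            booking_results.put({**msg, "status": "rejected"})
        else:
            booked_resources.add(msg["resource_id"])
            booking_results.put({**msg, "status": "accepted"})

    def user_service_on_result():
        """User Service consumes the result and updates the UI (or sends an email)."""
        result = booking_results.get()
        print(f"{result['request_id']}: booking {result['status']}")

    # Two users request the same resource; only the first is accepted.
    user_service_book("room-42")
    user_service_book("room-42")
    booking_service_handle()
    booking_service_handle()
    user_service_on_result()
    user_service_on_result()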

Ability to ensure message was successfully sent to Event Hub from APIM

Is it possible to ensure that a message was successfully delivered to an Event Hub when sending it with the log-to-eventhub policy in API Management?
Edit: In our solution we cannot allow any request to proceed if a message was not delivered to the Event Hub. As far as I can tell the log-to-eventhub policy doesn't check for this.
Welcome to Stack Overflow!
Note: Once the data has been passed to an Event Hub, it is persisted and will wait for Event Hub consumers to process it. The Event Hub does not care how it is processed; it just cares about making sure the message will be successfully delivered.
For more details, refer to "Why send to an Azure Event Hub?".
Hope this helps.
Event Hubs is built on top of Service Bus. According to the Service Bus documentation,
Using any of the supported Service Bus API clients, send operations into Service Bus are always explicitly settled, meaning that the API operation waits for an acceptance result from Service Bus to arrive, and then completes the send operation.
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a "tracking-id" inside of it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
When using the AMQP protocol, which is the exclusive protocol for the .NET Standard client and the Java client and which is an option for the .NET Framework client, message transfers and settlements are pipelined and completely asynchronous, and it is recommended that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operation to different entities overlap. The completions might also occur out of the original send order.
I think this means the SDK is getting a receipt for each message.
This theory is further supported by the RetryPolicy Class used in the ClientEntity.RetryPolicy Property of the EventHubSender Class.
In the API Management section on logging-to-eventhub, there is also a section on retry intervals. Below that are sections on modifying the return response or taking action on certain status codes.
Once the status codes of a failed logging attempt are known, you can modify the policies to take action on failed logging attempts.
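As an illustration of that "explicitly settled send" behaviour from the application side, here is a sketch using the azure-eventhub Python SDK (v5) rather than the APIM log-to-eventhub policy itself; the connection string and event hub name are placeholders:

    # pip install azure-eventhub
    from azure.eventhub import EventHubProducerClient, EventData
    from azure.eventhub.exceptions import EventHubError

    CONNECTION_STR = "<event-hubs-namespace-connection-string>"   # placeholder
    EVENTHUB_NAME = "<event-hub-name>"                            # placeholder

    def send_or_fail(payload: str) -> bool:
        """Return True only if Event Hubs acknowledged (settled) the send."""
        producer = EventHubProducerClient.from_connection_string(
            conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
        )
        try:
            with producer:
                batch = producer.create_batch()
                batch.add(EventData(payload))
                producer.send_batch(batch)    # blocks until the service accepts the batch
            return True
        except EventHubError as exc:
            # The send was rejected or timed out; treat the message as not delivered.
            print(f"delivery not confirmed: {exc}")
            return False

    if not send_or_fail('{"requestId": "abc-123"}'):
        raise RuntimeError("aborting request: the event was not delivered")

The point is simply that the send either completes or raises, so the calling code can refuse to let the request proceed when delivery was not confirmed.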

How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call a REST API (external) so that the client application can be notified on the new transaction.
Apart from REST, is there any other option?
It's better to use events
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that
applications may listen for and take actions on. There is a set of
pre-defined events, and chaincodes can generate custom events. Events
are consumed by 1 or more event adapters. Adapters may further deliver
events using other vehicles such as Web hooks or Kafka.
Applications can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example for how to work with Events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
To add to Sergey's answer, there are 3 types of events.
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are user hooks that let user chaincode create events. [A weird thing I noticed is that only one CHAINCODE EVENT per invoke is allowed as per the current design.]
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey mentioned.
In your case, I am guessing that you are looking for a successful transaction. In that case, you should listen for BLOCK events and REJECTION events. The transaction UUID you received when your invoke was triggered can be used to scan the events and trigger an action when it matches.
Also note that if a transaction results in a REJECTION EVENT, it will not have a BLOCK EVENT.
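To show just the matching logic, here is a sketch; the event_stream iterator below is hypothetical, not a real Fabric SDK API, and stands for whatever decoded events your listener receives over gRPC:

    def wait_for_transaction(event_stream, tx_uuid):
        """Scan BLOCK and REJECTION events until the given transaction UUID appears.

        event_stream is a hypothetical iterator of decoded events, e.g.
        {"type": "BLOCK", "tx_uuids": [...]} or
        {"type": "REJECTION", "tx_uuid": "...", "error": "..."}.
        """
        for event in event_stream:
            if event["type"] == "REJECTION" and event.get("tx_uuid") == tx_uuid:
                return {"status": "rejected", "error": event.get("error")}
            if event["type"] == "BLOCK" and tx_uuid in event.get("tx_uuids", []):
                # A rejected transaction never shows up in a BLOCK event,
                # so reaching here means it was committed.
                return {"status": "committed"}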
Hope this helps.

How to stream a queue across multiple subscribers?

What I am trying to accomplish on higher level:
I have a function that does I/O and generates messages. I have multiple subscriber clients that can subscribe or leave at any time. When a new client subscribes, it should get the last x messages of previous output before streaming new messages (much like unix "tail -f").
My idea was to send off the messages to an agent, which acts as a ring buffer. New clients would read the agent and then add-watch to it. The problem is, how can I ensure no new message arrives between reading and add-watch?
My next idea was to create 2 refs, one for the list of clients and one for the ring buffer. I can then add clients or post messages in transactions. The problem is, when I add a client, I have to read the ring buffer and send its contents to the client (I/O). That is a side effect inside a transaction that may be retried.
My last idea is to use locks, but that can't be the only way?
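To make the race concrete, here is a minimal sketch of the third idea (a lock), written in Python with hypothetical names; the Clojure equivalent would do the same two steps inside one critical section. The snapshot of the ring buffer and the registration of the subscriber happen atomically, so no message can slip in between them:

    import threading
    from collections import deque

    class TailBroadcaster:
        """Keeps the last `history` messages and fans new ones out to subscribers."""

        def __init__(self, history=10):
            self._lock = threading.Lock()
            self._buffer = deque(maxlen=history)   # the ring buffer
            self._subscribers = []                 # callables invoked per message

        def publish(self, message):
            with self._lock:
                self._buffer.append(message)
                for subscriber in self._subscribers:
                    subscriber(message)            # delivered under the lock for simplicity

        def subscribe(self, subscriber):
            # Replay + register in one critical section: nothing can arrive
            # between "read the ring buffer" and "add the watch".
            with self._lock:
                for message in self._buffer:
                    subscriber(message)
                self._subscribers.append(subscriber)

A production version would hand each subscriber its own queue and deliver outside the lock, but the atomic snapshot-then-register step is the part that removes the race.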

Architecture for robust payment processing

Imagine 3 system components:
1. External ecommerce web service to process credit card transactions
2. Local Database to store processing results
3. Local UI (or win service) to perform payment processing of the customer order document
The external web service is obviously not transactional, so how do we guarantee:
1. Results are eventually persisted to the database when received from the web service, even if the database is not accessible at that moment (network issue, DB timeout).
2. Clients are prevented from processing the customer order while a payment initiated by another client has not yet been successfully persisted to the database (and is waiting in some kind of recovery queue).
The aim is to do the processing with non-transactional system components and guarantee that the transaction won't be repeated by another process in case of failure.
(Please look at it in the context of post-sale payment processing, where multiple operators might attempt manual payment processing; not a web checkout application.)
Ask the payment processor whether they can detect duplicate transactions based on an order ID you supply. Then if you are unable to store the response due to a database failure, you can safely resubmit the request without fear of double-charging (at least one PSP I've used returned the same response/auth code in this scenario, along with a flag to say that this was a duplicate).
Alternatively, just set a flag on your order immediately before attempting payment, and don't attempt payment if the flag was already set. If an error then occurs during payment, you can investigate and fix the data at your leisure.
I'd be reluctant to go down the route of trying to automatically cancel the order and resubmit, as this just gets confusing (e.g. what if cancelling fails - should you retry or not?). It's best to keep the logic simple so that when something goes wrong you know exactly where you stand.
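A minimal sketch of the "flag first, then charge" idea; order_repo and gateway are hypothetical placeholders for your database access and payment client:

    def process_payment(order_repo, gateway, order_id, amount):
        """Attempt payment at most once per order by flagging the order up front."""
        order = order_repo.load(order_id)
        if order.get("payment_attempted"):
            # A payment has already been started for this order; stop and
            # investigate rather than risk a double charge.
            raise RuntimeError(f"payment already attempted for {order_id}")

        order["payment_attempted"] = True
        order_repo.save(order)                    # the flag is persisted before charging

        # order_id doubles as a duplicate-detection key at the payment processor,
        # so resubmitting is safe if storing the response below fails.
        response = gateway.charge(amount=amount, reference=order_id)

        order["payment_result"] = response
        order_repo.save(order)
        return response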
In any system like this, you need robust error handling and error reporting. This is doubly true when it comes to dealing with payments, where you absolutely do not want to accidentally take someone's money and not deliver the goods.
Because you're outsourcing your payment handling to a 3rd party, you're ultimately very reliant on the gateway having robust error handling and reporting systems.
In general then, you hand off control to the payment gateway and start a task that waits for a response from the gateway, which is either 'payment accepted' or 'payment declined'. When you get that response you move onto the next step in your process and everything is good.
When you don't get a response at all (time out), or the response is invalid, then how you proceed very much depends on the payment gateway:
If the gateway supports it, send a 'cancel payment' style request. If the payment cancels successfully then you probably want to send the user to a 'sorry, please try again' style page.
If the gateway doesn't support cancelling, or you have no communication with the gateway, then you will need to contact the 3rd party manually (e.g. by telephone) to discover what went wrong and how to proceed. To aid this you need to dump as much detail as you have to error logs, such as date/time, customer ID, transaction value, product IDs, etc.
Once you're back on your site (and the payment is accepted) you're much more in control of errors, but in brief, if you can't complete the order you should either dump the details to disk (such as a CSV file for manual handling) or contact the gateway to cancel the payment.
It's also worth having a system in place to track errors as they occur, and if an excessive number occur, consider what should happen. If it's a high-traffic site, for example, you may want to temporarily prevent further customers from placing orders while the issue is investigated.
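Roughly, the timeout branch described above might look like the following; gateway is a hypothetical client, and the important part is that every unclear outcome ends in either a cancel or a detailed error log:

    import logging
    from datetime import datetime, timezone

    log = logging.getLogger("payments")

    def take_payment(gateway, order):
        try:
            response = gateway.charge(order["total"], reference=order["id"], timeout=30)
        except TimeoutError:
            # No response at all: cancel if the gateway supports it,
            # otherwise record everything needed for manual follow-up.
            if gateway.supports_cancel:
                gateway.cancel(reference=order["id"])
                return {"status": "retry", "message": "Sorry, please try again."}
            log.error(
                "payment outcome unknown: time=%s customer=%s order=%s value=%s items=%s",
                datetime.now(timezone.utc).isoformat(), order["customer_id"],
                order["id"], order["total"], order["item_ids"],
            )
            return {"status": "manual_review"}

        if response["accepted"]:
            return {"status": "paid"}
        return {"status": "declined"}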
Distributed messaging.
When your payment gateway returns, submit a message to a durable queue that guarantees a handler will eventually get it and process it. The handler would update the database. Should a failure occur at that point, the handler can leave the message in the queue, repost it to the queue, or post an alternative message.
Should something occur later that invalidates the transaction, another message could be queued to "undo" the change.
There's a fair amount of buzz lately about eventual consistency and distributed messaging. NServiceBus is the new component hotness. I suggest looking into this; I know we are.
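A bare-bones sketch of the handler side of that idea, using an in-process queue as a stand-in for the durable broker (NServiceBus or otherwise) and a hypothetical db object:

    import queue
    import time

    payment_events = queue.Queue()       # stand-in for a durable message queue

    def on_gateway_response(order_id, gateway_response):
        """Called when the payment gateway returns: enqueue the result and move on."""
        payment_events.put({"order_id": order_id, "gateway_response": gateway_response})

    def payment_event_handler(db):
        """Updates the database; on failure the message goes back onto the queue."""
        while True:
            message = payment_events.get()
            try:
                db.record_payment(message["order_id"], message["gateway_response"])
            except Exception:
                time.sleep(5)                    # back off before retrying
                payment_events.put(message)      # a real broker would simply redeliver

A compensating action (the "undo" mentioned above) would just be another message type handled the same way.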