Event Hub: handling a receiver error for a single event failure - azure-eventhub

Is there a way for an Event Hub listener to retry only the single failed event, or do I have to fail the full batch?
The listener gets a list of events, and checkpointing moves the pointer forward for the full batch.

There is no good way to replay events using EventProcessorHost. You are expected to handle failures in your own code (the ProcessEvents callback).
If there is a poison event in the system and the EventProcessorHost cannot proceed and needs to bail out, the only way to achieve this is to checkpoint up to a known good event and then unregister the EventProcessorHost or kill the process.
You can control exactly which event you checkpoint up to using the PartitionContext.Checkpoint(EventData) API.
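For illustration, a minimal sketch of that pattern using the Java EventProcessorHost SDK (the .NET API is analogous); handle() stands in for your own processing logic, and depending on the SDK version checkpoint(EventData) may return a CompletableFuture that you should wait on before returning:

```java
import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventprocessorhost.CloseReason;
import com.microsoft.azure.eventprocessorhost.IEventProcessor;
import com.microsoft.azure.eventprocessorhost.PartitionContext;

// Sketch: process the batch, checkpoint only up to the last event that was
// handled successfully, and bail out when a poison event is hit.
public class SelectiveCheckpointProcessor implements IEventProcessor {

    @Override
    public void onOpen(PartitionContext context) { }

    @Override
    public void onClose(PartitionContext context, CloseReason reason) { }

    @Override
    public void onError(PartitionContext context, Throwable error) { }

    @Override
    public void onEvents(PartitionContext context, Iterable<EventData> events) throws Exception {
        EventData lastGood = null;
        for (EventData event : events) {
            try {
                handle(event);            // your processing logic
                lastGood = event;
            } catch (Exception poison) {
                break;                    // stop at the first poison event
            }
        }
        if (lastGood != null) {
            context.checkpoint(lastGood); // move the pointer only to the last known good event
        }
        // if the loop broke early, unregister the host / stop the process here
    }

    private void handle(EventData event) {
        // application-specific processing
    }
}
```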

Related

Event Hub checkpoint as a resiliency mechanism

I am reading data and processing it further. If processing fails, I will not call the checkpoint function. I hope that not checkpointing will stop further processing of events until the issue is fixed. Is checkpointing sufficient for resiliency, or do I need to implement something like a dead-letter (blob) processor to provide failure handling?
Is checkpointing sufficient for resiliency?
No, not really. Processing will continue until the process hosting the processing logic stops. For example, if you have an Azure Function processing the messages, it will just keep going.
What does happen is that after a restart of the function (or whatever process you use to handle Event Hub messages), it will start processing messages from the last checkpoint. That is probably not what you want, because the messages that caused a failure in the past will be processed again and will probably fail again.
There is no dead-lettering or retry mechanism out of the box; you will need to define that logic yourself.
So, TL;DR: the checkpoint's only purpose is to tell the processing logic where to start processing messages from the backlog.
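As a rough idea of what "define that logic yourself" could look like, here is a hand-rolled dead-lettering sketch against the Java EventProcessorHost API; DeadLetterSink and handle() are hypothetical placeholders for whatever sink and processing you use:

```java
import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventprocessorhost.PartitionContext;

// Hand-rolled dead-lettering sketch: failed events are diverted to a separate
// sink so the checkpoint can still advance past them. DeadLetterSink is a
// hypothetical interface; back it with blob storage, a queue, a table, etc.
public class DeadLetteringBatchHandler {

    public interface DeadLetterSink {
        void save(EventData event, Exception cause);
    }

    private final DeadLetterSink deadLetters;

    public DeadLetteringBatchHandler(DeadLetterSink deadLetters) {
        this.deadLetters = deadLetters;
    }

    public void processBatch(PartitionContext context, Iterable<EventData> events) throws Exception {
        EventData last = null;
        for (EventData event : events) {
            try {
                handle(event);              // normal processing
            } catch (Exception e) {
                deadLetters.save(event, e); // park the poison event instead of blocking the stream
            }
            last = event;
        }
        if (last != null) {
            context.checkpoint(last);       // now it is safe to move the pointer forward
        }
    }

    private void handle(EventData event) {
        // application-specific processing
    }
}
```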

IoT lifecycle events handling

What is the best practice to check whether an AWS IoT Core thing is still offline?
Being able to query the state of an AWS IoT thing will, for many, be an essential part of their application. Luckily, AWS has a best practice for getting lifecycle events here: https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html
It says that we should check whether the device is still offline before performing any actions.
I'm handling it on a Node.js server (listening to events), so the question is: what's the best way to handle it?
For now the plan is to create some storage (Redis?) and implement a timeout (5-10 seconds): if I receive a disconnect event, I'll put it in the DB, wait for the timeout, and if no other messages regarding this device arrive (Connected), I'll run some logic.
Is this the right approach?
The point is not to use SQS from AWS.
And as the AWS docs say, the order of messages is not guaranteed, so what's the best practice to handle that?
If your device emits a signal at periodic intervals, you can treat that as a heartbeat signal.
You can maintain a timer (x minutes/hours, etc.) and wait for the heartbeat signal from the device.
If the timer times out and you have not received the heartbeat signal, it is safe to assume that the device has gone offline. Such events are easy to model as a detector model in AWS IoT Events.
This example from AWS IoT Events does exactly the same thing.
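A sketch of that watchdog-timer idea, written in Java for consistency with the other examples in this post (the same pattern works with setTimeout on a Node.js server, or as a detector model in AWS IoT Events); the timeout and the offline callback are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Every heartbeat re-arms the device's timer; if the timer fires, the device
// is assumed to be offline.
public class HeartbeatWatchdog {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();
    private final long timeoutSeconds;

    public HeartbeatWatchdog(long timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    /** Call this whenever a heartbeat (or any message) arrives from the device. */
    public void onHeartbeat(String deviceId) {
        ScheduledFuture<?> previous = timers.put(
                deviceId,
                scheduler.schedule(() -> markOffline(deviceId), timeoutSeconds, TimeUnit.SECONDS));
        if (previous != null) {
            previous.cancel(false);   // re-arm: the device is still alive
        }
    }

    private void markOffline(String deviceId) {
        timers.remove(deviceId);
        // the device missed its heartbeat window: run your "device offline" logic here
        System.out.println(deviceId + " appears to be offline");
    }
}
```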

How to wait for Akka Persistent Actor to persistAll?

I want to send a reply after I have persisted and updated the state of the actor using persistAll. Unfortunately, I have not found a callback or onSuccess handler for sending a reply after the last event has been persisted.
This is a shortcoming of the API; there is no built-in way to react to all of the persistAll events completing. You will have to keep a counter or a set of completed persists yourself and only trigger your logic when the last persist completes.
As far as I remember, this cannot easily be fixed because it would break binary and source compatibility.
In the "next generation" persistent actors (Akka Typed), this works more as you would expect: the side effect you want to execute on successful persist of the events will only execute once, when all the events are complete.

How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call an external REST API so that the client application can be notified of the new transaction?
Apart from REST, is there any other option?
It's better to use events
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that applications may listen for and take actions on. There is a set of pre-defined events, and chaincodes can generate custom events. Events are consumed by 1 or more event adapters. Adapters may further deliver events using other vehicles such as Web hooks or Kafka.
Your application can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example for how to work with Events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
To add to Sergey's answer, there are 3 types of events.
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are hooks that let user chaincode create custom events. [A weird thing I noticed is that only one CHAINCODE EVENT per invoke is allowed in the current design.]
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey mentioned.
In your case, I am guessing that you are looking for a successful transaction. In that case, you should listen for BLOCK events and REJECTION events. The transaction UUID you received when your invoke was triggered can be used to scan the events and trigger an action when it matches, as in the sketch below.
Also note that if a transaction results in a REJECTION EVENT, it will not have a BLOCK EVENT.
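To illustrate only the matching logic (not the actual Fabric event API), here is a rough sketch; the listener plumbing that extracts committed or rejected transaction IDs from the gRPC event stream is assumed to exist elsewhere:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Match pending transaction UUIDs against incoming block/rejection events.
// The callers that feed committed or rejected transaction IDs into this class
// are hypothetical and depend on your event listener implementation.
public class TxStatusTracker {

    private final Set<String> pendingTxIds = ConcurrentHashMap.newKeySet();

    /** Remember the UUID returned when the invoke was submitted. */
    public void track(String txId) {
        pendingTxIds.add(txId);
    }

    /** Called with the transaction IDs found in a BLOCK event. */
    public void onBlockEvent(Iterable<String> committedTxIds) {
        for (String txId : committedTxIds) {
            if (pendingTxIds.remove(txId)) {
                // transaction made it into a block: notify the client application here
            }
        }
    }

    /** Called with the transaction ID found in a REJECTION event. */
    public void onRejectionEvent(String rejectedTxId, String errorMsg) {
        if (pendingTxIds.remove(rejectedTxId)) {
            // a rejected transaction never appears in a block event: surface the error here
        }
    }
}
```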
Hope this helps.

WSO2 CEP output event adaptor error

I am new to WSO2 CEP.
I have created the entire flow to read the JMS message and split it using the text formatter. The problem is that when I try to push messages into the queue, they never reach the output event adaptor. I have a MySQL event adaptor configured in my event formatter, but I keep getting the message below in my log:
[2014-02-13 21:20:06,347] ERROR - {ReceiverGroup} No receiver is reachable at reconnection, can't publish the events
[2014-02-13 21:20:06,352] ERROR - {AsyncDataPublisher} Reconnection failed for for tcp://localhost:7661
Can someone help me understand what this tcp://localhost:7661 is all about?
Regards
Subbu
tcp://localhost:7661 is the default port to which Thrift (WSO2) events are published. It seems a default event formatter has been created and is trying to publish events to that port.
Can you check your list of event formatters and make sure that no event formatters of type WSO2Event have been created? Such an event formatter may be created automatically if you set an exported stream to be 'pass-through' when creating the execution plan.
You can enable event tracing [1] and monitor it to determine exactly up to which point the event gets through your configured flow.
[1] http://docs.wso2.org/display/CEP300/CEP+Event+Tracer
HTH,
Lasantha