How to control max events in an Azure Event Hub listener batch - azure-eventhub

I am trying to implement an Event Hub receiver, and the listener interface is:
ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
Is there a way to control the maximum number of items in messages?

EventProcessorOptions.MaxBatchSize is your friend.
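For example, a minimal sketch assuming the Microsoft.Azure.EventHubs.Processor SDK; MyProcessor stands in for your IEventProcessor implementation, and the EventProcessorHost is assumed to be constructed elsewhere:

    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs.Processor;

    static async Task RegisterWithBatchLimitAsync(EventProcessorHost host)
    {
        var options = new EventProcessorOptions
        {
            // Cap the number of events handed to each ProcessEventsAsync call.
            // 50 is an arbitrary example value.
            MaxBatchSize = 50
        };

        // MyProcessor is your IEventProcessor implementation exposing
        // ProcessEventsAsync(PartitionContext, IEnumerable<EventData>).
        await host.RegisterEventProcessorAsync<MyProcessor>(options);
    }

Note that MaxBatchSize is an upper bound; the processor may still hand you fewer events per call, depending on what is available on the partition.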

Related

Visibility timeout in Amazon SQS not working

How do I configure visibility timeout so that a message in SQS can be read again?
I have Amazon SQS as a message queue. Messages are being sent by multiple applications. I am now using a Spring listener to read messages from the queue as below:
public DefaultMessageListenerContainer jmsListenerContainer() {
    // JMS connection factory backed by SQS
    // (note: the second withAWSCredentialsProvider call overrides the first)
    SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
            .withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain())
            .withEndpoint(environment.getProperty("aws_sqs_url"))
            .withAWSCredentialsProvider(awsCredentialsProvider)
            .withNumberOfMessagesToPrefetch(10)
            .build();

    // Container that delivers incoming queue messages to queueListener.onMessage()
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(sqsConnectionFactory);
    dmlc.setDestinationName(environment.getProperty("aws_sqs_queue"));
    dmlc.setMessageListener(queueListener);
    return dmlc;
}
The queueListener class implements javax.jms.MessageListener and processes messages in its onMessage() method.
I have also configured a scheduler to read the queue again after a certain period of time. It uses receiveMessage() of com.amazonaws.services.sqs.AmazonSQS.
As soon as a message reaches the queue, the listener reads it. I want to read the message again after a certain period of time through the scheduler, but once a message has been read by the listener it does not become visible or readable again. According to Amazon's SQS developer guide the default visibility timeout is 30 seconds, but the message is not becoming visible even after 30 seconds. I have tried setting a custom visibility timeout in the SQS queue parameter console, but it is not working.
For the record, nothing is deleting the message from the queue.
I only have a passing familiarity with Amazon SQS, but I can say that typically in messaging use cases, when a consumer receives and acknowledges a message, that message is removed (i.e. deleted) from the queue. Given that your Spring application is receiving the message, I would suspect it is also acknowledging it and therefore removing it from the queue, which prevents your scheduler from receiving it later. Note that Spring's DefaultMessageListenerContainer uses JMS' AUTO_ACKNOWLEDGE mode by default.
Amazon's documentation essentially states that if a message is acknowledged in a JMS context, it is "deleted from the underlying Amazon SQS queue."

Azure Event hub C# batch sender

Is there an option to send events in a batch to Event Hub from C#?
The SendBatch API mentioned elsewhere doesn't exist in the new EventHubClient.
There is no longer a SendBatch call, but there is a SendAsync overload that takes an IEnumerable<EventData>, which is effectively the same thing.
https://learn.microsoft.com/dotnet/api/microsoft.azure.eventhubs.eventhubclient#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_
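For instance, a minimal sketch assuming the Microsoft.Azure.EventHubs client; the EventHubClient is assumed to be created from your connection string elsewhere:

    using System.Collections.Generic;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs;

    static async Task SendSmallBatchAsync(EventHubClient client)
    {
        // Build up a collection of events...
        var batch = new List<EventData>
        {
            new EventData(Encoding.UTF8.GetBytes("event 1")),
            new EventData(Encoding.UTF8.GetBytes("event 2"))
        };

        // ...and hand the whole IEnumerable<EventData> to SendAsync in one call.
        await client.SendAsync(batch);
    }

If the combined payload might exceed the per-batch size limit, the same client also provides CreateBatch() and EventDataBatch.TryAdd, so you can fill batches up to the limit and send each one with SendAsync.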

How do I notify the client application when a chaincode is invoked?

When a chaincode is invoked, is there a way to call an external REST API so that the client application can be notified of the new transaction?
Apart from REST, is there any other option?
It's better to use events:
https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#35-events
Validating peers and chaincodes can emit events on the network that applications may listen for and take actions on. There is a set of pre-defined events, and chaincodes can generate custom events. Events are consumed by 1 or more event adapters. Adapters may further deliver events using other vehicles such as Web hooks or Kafka.
Your application can subscribe to the event stream from Fabric and listen for messages generated by your chaincode.
An example of how to work with events can be found here:
https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
To add to Sergey's answer, there are 3 types of events.
BLOCK EVENTs, which are created when the ledger changes.
REJECTION EVENTs, which are created when any error occurs (either in user chaincode or in system chaincode).
CHAINCODE EVENTs, which are user hooks that let user chaincode create custom events. [A weird thing I noticed is that only one CHAINCODE EVENT per invoke is allowed under the current design.]
You can have an event listener/client running at your end, listening on the gRPC port (you can get the port from the core.yaml file), or you can refer to the example Sergey mentioned.
In your case, I am guessing that you are looking for a successful transaction. In that case, you should listen for BLOCK EVENTs and REJECTION EVENTs. The transaction UUID you received when your invoke was triggered can be used to scan the events and trigger an action when it matches.
Also note that if a transaction results in a REJECTION EVENT, it will not have a BLOCK EVENT.
Hope this helps.

WSO2 CEP output event adaptor error

I am new to WSO2 CEP
I have created the entire flow to read the JMS message and split it using a text formatter. The problem is that when I try to push messages into the queue, they never reach the output event adaptor. I have a MySQL event adaptor and have configured it in my event formatter, but I keep getting the message below in my log:
[2014-02-13 21:20:06,347] ERROR - {ReceiverGroup} No receiver is reachable at reconnection, can't publish the events
[2014-02-13 21:20:06,352] ERROR - {AsyncDataPublisher} Reconnection failed for for tcp://localhost:7661
Can someone help me understand what this tcp://localhost:7661 is all about?
Regards
Subbu
tcp://localhost:7661 is the default port to which Thrift (WSO2Event) events are published. It seems a default event formatter has been created and is trying to publish events to that port.
Can you check your list of event formatters and ensure that no event formatters of type WSO2Event are created? This event formatter might be created automatically if you set an exported stream to be 'pass-through' when creating the execution plan.
You can enable event tracing [1] and monitor the flow to determine exactly up to which point the event travels in your configured flow.
[1] http://docs.wso2.org/display/CEP300/CEP+Event+Tracer
HTH,
Lasantha

Distributed WSO2 CEP

I have a distributed CEP setup with a JMS broker as the primary input.
Now if we tell our client application to send events to Topic X, the events will be delivered to every node in the CEP cluster, as each one will be listening on the same Topic X.
Will this lead to duplication of results? (Say I am counting a certain data field; since each node receives the same data, will my count be double the actual value if I have a 2-node cluster?)
Can CEP work off a JMS queue instead of a topic? That way, whichever node gets the event data first would consume the message off the queue. Does WSO2 CEP support JMS queues?
No, the current release (CEP 2.0.1) does not have support for receiving events from a JMS queue.
But if this is your requirement, then you can write your own CEP adaptor (broker) to receive events from a queue and push them to CEP.
To create a custom broker:
Create an appropriate broker type by extending org.wso2.carbon.broker.core.BrokerType, and an appropriate broker type factory by extending org.wso2.carbon.broker.core.BrokerTypeFactory, from the jar org.wso2.carbon.broker.core-4.0.5.jar.
Then, to configure that broker with CEP, create a file called "broker.xml" at wso2cep-2.0.1/repository/conf and add the following XML:
<brokerTypes xmlns="http://wso2.org/carbon/broker">
    <brokerType class="<<class reference>>" />
    ...
</brokerTypes>
Detailed documentation on creating a custom broker can be found at http://suhothayan.blogspot.com/2013/02/writing-custom-broker-for-wso2-cep.html