WSO2 Stream Processor: flow architecture

I am looking into WSO2 CEP and am trying to understand the right architecture for sensor data.
In our case we receive well-formed data. The entity looks like this:
private UUID sensorId;
private Double value;
private Long timeInNanoSeconds;
This data needs to be processed: alarms can be set on sensors/values, and there are custom calculations where, for example, the value of sensorId X is added to the value of sensorId Y.
Is it the correct approach to create a single sensorStream and send all sensor data from all devices/sensors into this one flow?
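
For reference, a minimal sketch of that single-stream approach, using the Siddhi 4.x Java library that WSO2 Stream Processor is built on. The stream name, alarm threshold and query below are placeholders I made up; the point is only that every device publishes into one sensorStream and the alarms/calculations are expressed as queries over that stream.

import org.wso2.siddhi.core.SiddhiAppRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.stream.input.InputHandler;
import org.wso2.siddhi.core.stream.output.StreamCallback;

public class SensorFlowSketch {
    public static void main(String[] args) throws Exception {
        // One shared stream; the UUID is sent as a string. The alarm condition is a placeholder.
        String app =
                "define stream sensorStream (sensorId string, value double, timeInNanoSeconds long); " +
                "@info(name = 'alarmQuery') " +
                "from sensorStream[value > 100.0] " +
                "select sensorId, value " +
                "insert into alertStream;";

        SiddhiManager manager = new SiddhiManager();
        SiddhiAppRuntime runtime = manager.createSiddhiAppRuntime(app);

        // React to alarms produced by the query above.
        runtime.addCallback("alertStream", new StreamCallback() {
            @Override
            public void receive(Event[] events) {
                for (Event e : events) {
                    System.out.println("ALARM: " + e);
                }
            }
        });

        runtime.start();

        // All devices/sensors publish into the same stream.
        InputHandler input = runtime.getInputHandler("sensorStream");
        input.send(new Object[]{"sensor-42-uuid", 123.4, System.nanoTime()});

        runtime.shutdown();
        manager.shutdown();
    }
}

A second query over the same stream could join or window the values of two sensorIds to express the custom X + Y calculation.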

Related

How do I query an Alexa device for a parameter?

I am creating a custom skill that will query a custom device for a parameter. In this case, it is the voltage.
The device is a node in Node-RED, so it is not a physical device but a virtual one.
The device will be linked to my account. Here is what I am thinking the workflow would be:
"Alexa, ask test app what is the motor voltage?"
I get a session request that goes to my custom intent and executes the corresponding function on the Lambda server.
----- here is the part that is fuzzy ----
Using some device ID, the Lambda server sends out a request to the virtual device.
The node for the device gets this request (most likely some sort of JSON object), parses it, and sends back the requested parameter stored on the Node-RED server (for the sake of discussion, let's say it is a constant number that lives on the server).
The Lambda server gets this response and forwards it to the Alexa service.
Alexa: "The voltage of the motor is twelve volts."
So basically, how do I do this? Is this the correct workflow for Alexa, or is there a different one? What components (besides the components needed for Alexa to run) will I need? I believe I can get the ID of the device in the handler_interface.
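
One possible shape for the fuzzy part (steps 3-5), assuming the Node-RED flow exposes its value over an HTTP endpoint (an http-in/http-response pair): the Lambda function simply calls that endpoint and turns the reply into the speech text. The URL, the device-ID path segment and the plain-text reply format below are all assumptions, not Alexa requirements.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MotorVoltageIntent {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Called from the intent handler once the device ID has been resolved.
    public String fetchVoltageSpeech(String deviceId) throws Exception {
        // Hypothetical http-in endpoint exposed by the Node-RED flow.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://node-red.example.com/device/" + deviceId + "/voltage"))
                .GET()
                .build();

        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());

        // Assume the flow replies with a plain number; parse JSON here instead if it doesn't.
        String voltage = response.body().trim();
        return "The voltage of the motor is " + voltage + " volts";
    }
}

The Alexa-specific plumbing (intent handler, response envelope) would wrap a call like this; whether the device is reached over HTTP, MQTT or something else depends entirely on how the Node-RED flow is set up.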

REST API for monitoring undelivered messages in Google Cloud Pub/Sub

I want to implement a service that monitors undelivered messages and sends a notification (or triggers further processing) when they reach a threshold.
I have already looked at Stackdriver. It provides monitoring and alerting, but as far as I can see the Stackdriver Monitoring API only offers a way to get the metricDescriptor; it does not seem to offer an API to get the number of undelivered messages.
Is there actually an API to get the metric values?
You can get the values via the projects.timeSeries.list method. You would set name to projects/<your project>, filter to metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages", and the end time (and, if a range of values is desired, the start time as well) to a string representing a time in RFC3339 UTC "Zulu" format, e.g., 2018-10-04T14:00:00Z. If you want to look at a specific subscription, set the filter to metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" AND resource.label.subscription_id = "<subscription name>".
The result will be one or more TimeSeries objects (depending on whether or not you specified a specific subscription), with the points field containing the data points for the specified time range, each of which has its value's int64Value set to the number of messages that have not been acknowledged by subscribers.
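As a sketch of what that looks like in code, here is the same call through the Cloud Monitoring v3 Java client (which wraps the projects.timeSeries.list REST method); the project ID, subscription name and 5-minute window are placeholders:

import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.Point;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

public class UndeliveredMessagesCheck {
    public static void main(String[] args) throws Exception {
        String projectId = "my-project";            // placeholder
        String subscriptionId = "my-subscription";  // placeholder

        try (MetricServiceClient client = MetricServiceClient.create()) {
            long now = System.currentTimeMillis();
            ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                    .setName(ProjectName.of(projectId).toString())
                    .setFilter("metric.type = \"pubsub.googleapis.com/subscription/num_undelivered_messages\""
                            + " AND resource.label.subscription_id = \"" + subscriptionId + "\"")
                    .setInterval(TimeInterval.newBuilder()
                            .setStartTime(Timestamps.fromMillis(now - 5 * 60 * 1000))
                            .setEndTime(Timestamps.fromMillis(now))
                            .build())
                    .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
                    .build();

            // Each point's int64Value is the number of unacknowledged messages at that time.
            for (TimeSeries series : client.listTimeSeries(request).iterateAll()) {
                for (Point point : series.getPointsList()) {
                    System.out.println(Timestamps.toString(point.getInterval().getEndTime())
                            + " -> " + point.getValue().getInt64Value() + " undelivered messages");
                }
            }
        }
    }
}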

Kafka for consuming subscription data sent by a web service

Scenario:
I have a northbound web service (say A) and a southbound application (say C).
I am creating a microservice (say B) which transforms the data received from A into a format that is readable by C.
A can send data at random time intervals, and B has to transform it as soon as it is received.
What I think:
Once B subscribes to A, A starts sending data via a callback URL. B saves the data in MongoDB and, after some interval, processes the data and pushes it to C.
Question:
1. Since A is streaming a type of data to B, can I use Kafka for consuming the data?
2. If not, what are the alternatives?
3. Is there any other, more efficient way of doing this?
What you are describing sounds to me like a typical stream-transformation application. You can definitely use Kafka for this, but you could probably also use any other pub/sub service out there.
Setting up Kafka just for this seems like overkill; I would rather look into the simpler pub/sub services offered online.
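
If you do go the Kafka route, B is essentially a plain consumer loop. A minimal sketch is below; it assumes A's callback data has already been produced into a Kafka topic by some small producer or connector (that producing step is not shown), and the topic name, group id, broker address and the transform/push stubs are placeholders:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TransformerB {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-b");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("a-events")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String transformed = transformForC(record.value());
                    pushToC(transformed);
                }
            }
        }
    }

    // Stub: map A's format to C's format.
    private static String transformForC(String payload) { return payload; }

    // Stub: deliver the transformed payload to C (HTTP call, write, etc.).
    private static void pushToC(String payload) { }
}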

How to override the event timestamp set by WSO2 DAS

Currently, WSO2 Data Analytics Server sets the current timestamp on every event received through the available APIs. Is there a way to pass a timestamp value with the event data over the APIs, in order to send historical events to DAS?
From DAS 3.1.0 RC1 onwards this can be achieved. You can follow the steps below to try it out.
Download DAS 3.1.0 RC1 from here.
Create an event stream with your payload, add an attribute named _timestamp, and set the attribute type to long.
Persist the event, selecting your payload attributes. (Please note that you will not be able to select the _timestamp attribute, so leave it as it is.)
Now simulate an event by providing your payload data along with the _timestamp epoch, e.g. 1450206041000. The Data Explorer will show an event received at 2015-12-16 00:30:41.
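
A quick sanity check of that epoch value (a sketch; the displayed time depends on the DAS server's timezone, and reproducing 2015-12-16 00:30:41 assumes a UTC+05:30 zone such as Asia/Colombo):

import java.time.Instant;
import java.time.ZoneId;

public class TimestampCheck {
    public static void main(String[] args) {
        Instant ts = Instant.ofEpochMilli(1450206041000L);
        System.out.println(ts);                                   // 2015-12-15T19:00:41Z (UTC)
        System.out.println(ts.atZone(ZoneId.of("Asia/Colombo"))); // 2015-12-16T00:30:41+05:30[Asia/Colombo]
    }
}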

How to specify the PartitionKey-to-ConsumerGroup binding in Azure Event Hubs

I want to integrate my application, through Event Hubs, with multiple types of devices: a mobile app, different types of embedded systems, etc. Each type of sender sends data in its own specific format and needs its own specific handler as well, as shown below:
Mobile APP (Partition key “MobileAPP”) = Consumer Group 1
Embedded System 1 (Partition key “Embedded1”) = Consumer Group 2
Embedded System 2 (Partition key “Embedded2”) = Consumer Group 2
So can you please tell me how I should specify the above binding in the Event Hubs implementation, so that each type of message is handled by its particular consumer group?
Normally, on the receiver side, I only see the default consumer group mentioned. I can see that during the EventProcessorHost implementation we can create a new consumer group with namespaceManager.CreateConsumerGroupIfNotExists(ehd.Path, consumerGroupName), but I am not able to understand how to make sure that all messages associated with a particular partition key will be handled by their associated consumer group. Where should I specify the PartitionKey-to-ConsumerGroup binding?
In short, there is no straightforward way to specify a PartitionKey-to-ConsumerGroup binding.
Here's why:
Event Hubs is a high-throughput, durable stream which offers stream-level semantics.
Simply put, imagine it as the equivalent of a simple in-memory stream where you get a cursor on the stream (using the EventHubClient ReceiveByOffset or ReceiveByTimeStamp APIs) and call ReadNext() to get the next events. If you want such a stream to hold events at huge scale - say a day's worth of data - and you want it to be persistent (so that even if the app processing the stream crashes, you don't lose data), that's when you need Event Hubs.
Now, coming to your question: the feature you are looking for - filtering events based on a property of the event - is not a stream-level operation but an event-level operation.
The typical approach to implement it yourself is to pull events from Event Hubs (the event stream) and have a worker process the events (in your case, filter by PartitionKey) and push them to individual queues (or you could even partition your data by pushing each group of devices to a Topic, and have Subscriptions with filters pull the data off).
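A rough sketch of that pull-and-filter worker, using the azure-messaging-eventhubs Java client (the connection string, hub name and routing target are placeholders; in a real setup route() would forward to the per-type queue or topic rather than print):

import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;

public class PartitionKeyRouter {
    public static void main(String[] args) {
        EventHubConsumerClient consumer = new EventHubClientBuilder()
                .connectionString("<event-hub-connection-string>", "<event-hub-name>") // placeholders
                .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
                .buildConsumerClient();

        // Read a batch from every partition and route each event by its partition key.
        for (String partitionId : consumer.getPartitionIds()) {
            for (PartitionEvent pe : consumer.receiveFromPartition(partitionId, 100, EventPosition.earliest())) {
                EventData event = pe.getData();
                String partitionKey = event.getPartitionKey(); // "MobileAPP", "Embedded1", ...
                route(partitionKey, event.getBodyAsString());
            }
        }
        consumer.close();
    }

    // Placeholder: forward to the queue/topic (or handler) for this sender type.
    private static void route(String partitionKey, String body) {
        System.out.println(partitionKey + " -> " + body);
    }
}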
Now, the first question to answer before you decide on Event Hubs is: do you actually foresee the scale requirements Event Hubs is built for, versus directly using Service Bus Topics, which provide exactly the semantics you are looking for?
HTH!
Sree