Is it possible to detect unordered event patterns with WSO2?

I would like to detect some patterns using WSO2, but my current solution can only detect them if the matching events arrive consecutively.
Let's suppose the following pattern:
Event 1: Scanning Event from Source 1 to Target 2
Event 2: Attempt Exploit from Source 1 to Target 2
That would generate an Alert.
But in a real-world scenario the events won't arrive in order; there are too many computers in an enterprise.
Is there a way to detect the previous pattern with the following event sequence?
Event 1: Scanning Event from Source 1 to Target 2
Event 2: Not relevant
Event 3: Not relevant
...
Event N: Attempt Exploit from Source 1 to Target 2
My current code is:
from every (e1=Events) -> e2=Events
within 10 min
select ...
having e1.type=='Scan' and e2.type=='attack' and e1.Source_IP4==e2.Source_IP4
insert into Alert;
I've also tried other kinds of solutions, like:
from every e1=Events,e2=Events[Condition]
within 10 min
select ...
having e1.type=='Scan' and e2.type=='attack' and e1.Source_IP4==e2.Source_IP4
insert into Alert;
Maybe it could be done with a partition? Partitioning the streams by Source_IP4?

I've finally got it working.
The problem was using "having" to detect the pattern; the condition has to be moved into the "filter condition" section instead. The pattern syntax is:
from (every)? <event reference>=<input stream>[<filter condition>] ->
(every)? <event reference>=<input stream>[<filter condition>] ->
...
(within <time gap>)?
select <event reference>.<attribute name>, <event reference>.<attribute name>, ...
insert into <output stream>
Solution:
from every (e1=Events) -> e2=Events[e1.type=='Scan' and type=='attack' and e1.Source_IP4==Source_IP4]
within 10 min
select ...
insert into Alert;

Related

Datadog alarm based on multiple thresholds

I am struggling to see if this is at all possible. If I have 2 queries:
A: avg:metric_one{service:foo}.as_count()
B: avg:metric_two{service:foo}.as_count()
And a FUNC (a/b)*100
I'd like a simple alarm that triggers when:
FUNC < 70 && A > 10
However, there doesn't seem to be any option to put two criteria in. Any advice?
Thanks
For this use case, create a Composite Monitor. With composites you can define your triggering conditions based on the combined status of multiple monitors.
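For example (one possible setup, with the details only illustrative): create one monitor that alerts when avg:metric_one{service:foo}.as_count() > 10, a second monitor on the formula (a/b)*100 that alerts when it drops below 70, and then create a composite monitor from those two with a trigger condition along the lines of a && b, where a and b are the letters the composite editor assigns to the selected sub-monitors.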

Override field in the input before passing to the next state in AWS Step Function

Say I have 3 states, A -> B -> C. Let's assume inputs to A include a field called names which is of type List and each element contains two fields firstName and lastName. State B will process the inputs to A and return a response called newLastName. If I want to override every element in names such that names[i].lastName = newLastName before passing this input to state C, is there a built-in syntax to achieve that? Thanks.
You control the events passed to the next task in a Step Function with three definition attributes: ResultPath and OutputPath on leaving one task, and InputPath on entering the next one.
You first have to understand how the State Machine crafts the event for the next task, and how each of the three parameters above changes it.
You have to at least have ResultPath. This is the key in the event that the output of your lambda will be placed under, so ResultPath="$.my_path" would result in a JSON object that has a top-level key of my_path with the value equal to whatever is outputted from the lambda.
If this is the only attribute, it is tacked onto whatever the input was. So if your input event was a JSON object with keys original_key_1 and some_other_key, your output with just the above ResultPath would be:
{
  "original_key_1": some value,
  "some_other_key": some other value,
  "my_path": the output of your lambda
}
Now if you add OutputPath, this cuts off everything OTHER than the given path (AFTER the result path has been added) from the next output.
If you added OutputPath="$.my_path" you would end up with a json of:
{ output of your lambda }
(your output had better be a JSON-compatible object, like a Python dict!)
InputPath does the same thing ... but for the input. It cuts off everything other than the path described, and that is the only thing sent into the lambda. But it does not stop the input from being appended - so InputPath + ResultPath results in less being sent into the lambda, but everything all together on exit.
There isn't really loop logic like the one you describe, however - Task and State Machine definitions are static directions, not dynamic logic.
You can simply handle it inside the lambda; this is generally the preferred method. HOWEVER, if you do this, then you should use a combination of OutputPath and ResultPath to 'cut off' the input, having replaced the various fields of the incoming event with whatever you want before returning it at the end.
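As a rough illustration of the handle-it-in-the-lambda approach, a minimal Python handler for state B might look like the sketch below. The helper compute_new_last_name and the exact event shape are assumptions based on the question, not part of any real API:

def handler(event, context):
    # Derive the new last name however state B actually does it (placeholder here).
    new_last_name = compute_new_last_name(event)
    # Override lastName on every element of the incoming names list, in place.
    for person in event.get("names", []):
        person["lastName"] = new_last_name
    # Return the whole modified input so state C still sees everything else unchanged.
    return event

def compute_new_last_name(event):
    # Hypothetical stand-in for B's real logic.
    return "Smith"

Pairing this with, for example, ResultPath="$" makes the returned (modified) event replace the original input, so state C receives the updated names list rather than a nested copy.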

Onyx: Can't pick up trigger/emit results in the next task

I'm trying to get started with Onyx, the distributed computing platform in Clojure. In particular, I'm trying to understand how to aggregate data. If I understand the documentation correctly, a combination of a window and a :trigger/emit function should allow me to do this.
So, I modified the aggregation example (Onyx 0.13.0) in three ways (cf. gist with complete code):
in -main I println any segments put on the output channel; this works as expected with the original code in that it picks up all segments and prints them to stdout.
I add an emit function like this:
(defn make-ds
  [event window trigger {:keys [lower-bound upper-bound event-type] :as state-event} extent-state]
  (println "make-ds called")
  {:ds window})
I add a trigger configuration (original dump-words trigger omitted for brevity):
(def triggers
  [{:trigger/window-id :word-counter
    :trigger/id :make-ds
    :trigger/on :onyx.triggers/segment
    :trigger/fire-all-extents? true
    :trigger/threshold [5 :elements]
    :trigger/emit ::make-ds}])
I change the :count-words task from calling the identity function to the reduce type, so that it doesn't hand over all input segments to the output (and added config options so that Onyx treats this as a batch):
{:onyx/name :count-words
 ;:onyx/fn :clojure.core/identity
 :onyx/type :reduce ; :function
 :onyx/group-by-key :word
 :onyx/flux-policy :kill
 :onyx/min-peers 1
 :onyx/max-peers 1
 :onyx/batch-size 1000
 :onyx/batch-fn? true}
When I run this now, I can see in the output that the emit function (i.e. make-ds) gets called for each input segment (first output coming from the dump-words trigger of the original code):
> lein run
[....]
Om -> 1
name -> 1
My -> 2
a -> 1
gone -> 1
Coffee -> 1
to -> 1
get -> 1
Time -> 1
make-ds called
make-ds called
make-ds called
make-ds called
[....]
However, the segments built from make-ds don't make it through to the output channel; they are never printed. If I revert the :count-words task to the identity function, this works just fine. Also, it looks as if the emit function is called for each input segment, whereas I would expect it to be called only when the threshold condition is true (i.e. whenever 5 elements have been aggregated in the window).
As the test for this functionality within the Onyx code base (onyx.windowing.emit-aggregate-test) is passing just fine, I guess I'm making a stupid mistake somewhere, but I'm at a loss figuring out what.
I finally saw that there was a warning in the log file onyx.log like this:
[clojure.lang.ExceptionInfo: Windows cannot be checkpointed with ZooKeeper unless
:onyx.peer/storage.zk.insanely-allow-windowing? is set to true in the peer config.
This should only be turned on as a development convenience.
[clojure.lang.ExceptionInfo: Handling uncaught exception thrown inside task
lifecycle :lifecycle/checkpoint-state. Killing the job. -> Exception type:
clojure.lang.ExceptionInfo. Exception message: Windows cannot be checkpointed with
ZooKeeper unless :onyx.peer/storage.zk.insanely-allow-windowing? is set to true in
the peer config. This should only be turned on as a development convenience.
As soon as I set this, I finally got some segments handed over to the next task. I.e., I had to change the peer config to:
(def peer-config
  {:zookeeper/address "127.0.0.1:2189"
   :onyx/tenancy-id id
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.peer/storage.zk.insanely-allow-windowing? true
   :onyx.messaging/impl :aeron
   :onyx.messaging/peer-port 40200
   :onyx.messaging/bind-addr "localhost"})
Now, :onyx.peer/storage.zk.insanely-allow-windowing? doesn't sound like a good thing to rely on. On the Clojurians Slack channel, Lucas Bradstreet recommended switching to S3 checkpointing instead.

Writing an "arrived" and "departed" query with siddhi using timeouts

I'm looking to replace Esper with Siddhi in my application. The Esper statement right now is a "timeout"-type pattern, where I need to report back when events of a unique "name" and "type" (just string values I can look for on the incoming events) arrive and depart. I know that an event has arrived when it first arrives in my firstunique window, and I assume the event departs if I don't see any events of the same name and type within a user-defined timeout value. Here's what my Esper statements look like (note that there's a lot more going on in the actual Esper; I've simplified this for the sake of example):
create window events_1.std:firstunique(name, type) as NameEvent
insert into events_1 select * from EventCycle[events]
on pattern [every event1=events_1->(timer:interval(4.0 sec) and not events_1(name=event1.name, type=event1.type))] delete from events_1 where name = event1.name AND type=event1.type
I then select the irstream from events_1, and from the incoming and removed events I get the "arrived" and "departed" events from the window.
For Siddhi, the firstUnique window is fairly straightforward (I think?):
from EventCycle#window.firstUnique('name')[ type=='type' ] select name, type insert into NameEvent
but I'm really drawing a blank on how to replace that Esper "on pattern" with Siddhi. Can I use a single "from every" statement for this, or will I need a different approach with Siddhi?
Any help setting me on the right path here will be appreciated!
One way of achieving your requirement is by checking the non-occurrence of an event.
I'm afraid that, AFAIK, non-occurrence checks are not supported in WSO2 CEP 3.1.0.
However, they are supported in WSO2 CEP 4.0.0 (yet to be released as I write this, on 24 Aug 2015).
You may refer to the non-occurrence detection sample [1].
Explanation:
Here we treat the first event as departed if no unique event has occurred within 4 seconds (the timeout) of the latest unique event.
So it appears we need to check for the non-occurrence of an event.
In CEP 4.0.0, you could achieve your requirement as follows:
from EventCycle#window.firstUnique(name)[ type=='type' ]
select name, type
insert into NameEvents; -- Note: I renamed NameEvent in the question to NameEvents
-- After seeing the latest unique event (Query-A), 4 seconds later (Query-B), we're checking if no unique event has occurred in between (Query-C and Query-D).
-- So, we're checking the non-occurrence of an event here... See link [1] for a sample.
--Query-A
from EventCycle#window.unique(name)[ type=='type' ]
select name, type
insert into latestEvents;
-- Query-B
from latestEvents#window.time(4 seconds) -- Here, I've taken 4 seconds as the timeout.
select *
insert expired events into timedoutEvents;
-- Query-C
from every latestEvent = latestEvents[ type=='type' ] ->
keepAliveEvent = latestEvents[ latestEvent.name == keepAliveEvent.name and type=='type' ]
or timedoutEvent = timedoutEvents[ latestEvent.name == timedoutEvent.name and type=='type' ]
select latestEvent.name as name, keepAliveEvent.name as keepAliveName
insert into filteredEvents;
-- Query-D
from filteredEvents [ isNull(keepAliveName)]
select name
insert into departedLatestEvents;
-- Since we want the name from the NameEvents stream, we're joining it with the departedLatestEvents stream
from departedLatestEvents#window.length(1) as departedLatestEvent join
NameEvents#window.length(1) as NameEvent
on departedLatestEvent.name == NameEvent.name -- not checking type as both departedLatestEvents and NameEvents have events only with type 'type'
select NameEvent.name as name, 'type' as type
insert into departedFirstEvents;
Link referred to in the code sample:
[1] https://docs.wso2.com/display/CEP400/Sample+0111+-+Detecting+non-occurrences+with+Patterns
Hope this helps!

Composing Flow Graphs

I've been playing around with Akka Streams and get the idea of creating Flows and wiring them together using FlowGraphs.
I know this part of Akka is still under development, so some things may not be finished and other bits may change, but is it possible to create a FlowGraph that isn't "complete" - i.e. isn't attached to a Sink - and pass it around to different parts of my code to be extended by adding Flows to it, and finally completed by adding a Sink?
Basically, I'd like to be able to compose FlowGraphs but don't understand how... Especially if a FlowGraph has split a stream by using a Broadcast.
Thanks
Next week (December) will be documentation writing for us, so I hope this will help you get into Akka Streams more easily! Having said that, here's a quick answer:
Basically, you need a PartialFlowGraph instead of a FlowGraph. In those we allow the usage of UndefinedSink and UndefinedSource, which you can then "attach" afterwards. In your case, we also provide a simple helper builder to create graphs which have exactly one "missing" sink; those can be treated exactly as if they were a Source, see below:
// for akka-streams 1.0-M1
val source = Source() { implicit b ⇒
  // prepare an undefined sink, which can be replaced by a proper sink afterwards
  val sink = UndefinedSink[Int]
  // build your processing graph
  Source(1 to 10) ~> sink
  // return the undefined sink which you mean to "fill in" afterwards
  sink
}
// use the partial graph (source) multiple times, each time with a different sink
source.runWith(Sink.ignore)
source.runWith(Sink.foreach(x ⇒ println(x)))
Hope this helps!