WSO2 CEP window time 1 day

I am evaluating different possible solutions for ATM card transaction fraud detection, with an input load of around 50,000 events per second and a response time of a few seconds.
WSO2 CEP looks like a good fit for the overall solution, but I am stuck with memory and performance problems. As I am new to WSO2 CEP, please suggest if there is a better way of doing the following in WSO2 CEP.
In order to detect fraud we have to aggregate data over a time period of 1 day, which causes either a memory overflow or a performance hit.
1) The following causes an out-of-memory error, as CEP tries to keep all records in memory for the whole day:
from instream#window.time(1 day)
select sum(amount) as totalAmt;
2) The following causes a performance hit, as it tries to load all records from the table to do the sum:
define table instream_table (....) from ( datasource,table,cache policy) ;
from instream#window.length(1) join instream_table
on instream.card_id == instream_table.card_id
select sum(instream_table.amount) as totalAmt;
The worst thing I have noticed is that CEP fires select * from instream_table instead of even adding a where clause for card_id, whereas I was expecting CEP to be intelligent enough to fire select sum(amount) from instream_table where card_id=xxxxx.
I have looked at the documentation for windows in WSO2 CEP but could not find any way to optimize this, as it looks like WSO2 CEP tries to do everything in memory.
Let me know if there is any workaround or better solution to achieve this. I have looked at other CEP engines like Esper, but it seems everybody is doing this in the same manner.

It seems to be a known bug in event tables. I have created a JIRA to track the issue in [1] and added a patch for it as well. It will be fixed in the next release.
[1] https://wso2.org/jira/browse/CEP-866
Thanks..

Related

Anomaly detection in production

I am trying to search for suggestions and solutions, but I am unable to find any.
After reading blogs, I was able to build a time-series anomaly detection model using BigQuery ML (ARIMA_PLUS).
My question is: how do I put such a model into production?
Probably I need to:
program the re-training of the model every X days
check whether there are new anomalies on the object table every X hours
record those anomalies in another table
But I am also open to other suggestions on how to proceed.
Is there anyone out there that can give me any hint?
Thank you!
The best way I found is to create "scheduled queries":
schedule a query for re-training of the model every X days:
CREATE OR REPLACE MODEL mymodel
OPTIONS( model_type = 'arima_plus',
         TIME_SERIES_DATA_COL = 'events',
         TIME_SERIES_TIMESTAMP_COL = 'approx_hour',
         HOLIDAY_REGION = 'GLOBAL',
         CLEAN_SPIKES_AND_DIPS = FALSE,
         DECOMPOSE_TIME_SERIES = TRUE)
AS (SELECT
      TIMESTAMP_TRUNC(PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', start_time), hour) AS approx_hour,
      COUNT(1) AS events
    FROM `mytable`
    GROUP BY approx_hour);
 
schedule a query to perform anomaly detection on the latest events, and write any anomalies to a table:
INSERT INTO `events_anomalies_table`
SELECT approx_hour AS hour,
       CAST(events AS INT64) AS actual_events,
       CAST(lower_bound AS INT64) AS expected_min_events,
       CAST(upper_bound AS INT64) AS expected_max_events,
       CURRENT_TIMESTAMP() AS execution_timestamp
FROM ML.DETECT_ANOMALIES(
  MODEL `my_model`,
  STRUCT(0.98 AS anomaly_prob_threshold),
  (SELECT
     TIMESTAMP_TRUNC(PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', start_time), hour) AS approx_hour,
     COUNT(1) AS events
   FROM `my_table`
   WHERE PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', start_time) > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
   GROUP BY approx_hour
   LIMIT 1))
WHERE is_anomaly = True

BigQueryIO - only first day table can be created, despite having CreateDisposition.CREATE_IF_NEEDED

I have a dataflow job processing data from pub/sub defined like this:
read from pub/sub -> process (my function) -> group into day windows -> write to BQ
I'm using Write.Method.FILE_LOADS because of bounded input.
My job works fine, processing many GBs of data, but it fails and retries forever when it needs to create another table. The job is meant to run continuously and create day tables on its own; it does fine on the first few, but then gives me this indefinitely:
Processing stuck in step write-bq/BatchLoads/SinglePartitionWriteTables/ParMultiDo(WriteTables) for at least 05h30m00s without outputting or completing in state finish
Before this happens it also throws:
Load job <job_id> failed, will retry: {"errorResult":{"message":"Not found: Table <name_of_table> was not found in location US","reason":"notFound"}
It is indeed a correct error, because this table doesn't exist. The problem is that the job should create it on its own because of the defined option CreateDisposition.CREATE_IF_NEEDED.
The number of day tables that it creates correctly without a problem depends on the number of workers. It seems that when some worker creates one table, its CreateDisposition changes to CREATE_NEVER, causing the problem, but that's only my guess.
A similar problem was reported here, but without any definite answer:
https://issues.apache.org/jira/browse/BEAM-3772?focusedCommentId=16387609&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16387609
ProcessElement definition here seems to give some clues but I cannot really say how it works with multiple workers: https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L138
I use Apache Beam SDK 2.15.0.
I encountered the same issue, which is still not fixed in Beam 2.27.0 of January 2021. Therefore I had to develop a workaround: a custom PTransform which checks if the target table exists before the BigQueryIO stage. It uses the BigQuery Java client for this and a Guava cache, as well as a windowing strategy (fixed, check every 15s) to sustain heavy traffic of about 5000 elements per second. Here is the code: https://gist.github.com/matthieucham/85459eff5fdea8d115be520e2dd5ccc1
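For anyone who cannot use the gist directly, here is a minimal sketch of that kind of pre-check, assuming the google-cloud-bigquery client and Guava are on the classpath. The class name, dataset handling, and the tableNameFor() helper are illustrative assumptions, not taken from the linked gist:

// Illustrative sketch only: look up the destination table before BigQueryIO,
// caching the result so the check is not repeated for every element.
import java.util.concurrent.TimeUnit;
import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.TableId;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.beam.sdk.transforms.DoFn;

public class EnsureTableExistsFn extends DoFn<TableRow, TableRow> {
  private final String dataset;
  private transient BigQuery bigquery;
  private transient Cache<String, Boolean> knownTables;

  public EnsureTableExistsFn(String dataset) {
    this.dataset = dataset;
  }

  @Setup
  public void setup() {
    bigquery = BigQueryOptions.getDefaultInstance().getService();
    // Re-check a table at most every 15 seconds, mirroring the fixed-window idea above.
    knownTables = CacheBuilder.newBuilder().expireAfterWrite(15, TimeUnit.SECONDS).build();
  }

  @ProcessElement
  public void processElement(ProcessContext c) {
    String table = tableNameFor(c.element());
    Boolean exists = knownTables.getIfPresent(table);
    if (exists == null) {
      exists = bigquery.getTable(TableId.of(dataset, table)) != null;
      // The real workaround would create the missing table here; this sketch only caches the lookup.
      knownTables.put(table, exists);
    }
    c.output(c.element());
  }

  private String tableNameFor(TableRow row) {
    // Hypothetical helper: derive the day-table name from the row.
    return "events_" + row.get("day");
  }
}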
There was a bug in the past that caused this error, but that particular one was fixed in commit https://github.com/apache/beam/commit/d6b4dcec5f297f5c1bd08f345f0e1e5c756775c2#diff-3f40fd931c8b8b972772724369cea310 Can you check to see if the version of Beam you are running includes this commit?

Cloud Dataflow - Heap Space error while using PcollectionList

I have to partition the data by a date field in it. I am doing it using the Partition transform.
When I divide yearly data by month, Partition returns a PCollectionList which has 12 PCollections. This works fine.
When I have to divide it by day, I will have to create 1*12*31 PCollections in the PCollectionList. This throws a heap space error. I tried with only 2 months of data, that is,
a PCollectionList of 2*31 PCollections.
I tried using n1-highmem-4 and n1-highmem-8 machines with more than 10 workers. It still throws a heap space error. I am testing with only a 2.0 MiB file, so I believe data size should not be a problem.
Please help me fix this, or a workaround for my solution is also most welcome.
Thanks in advance.
It sounds like you're trying to get time-based divisions of your data. Have you looked at windowing? It should allow you to do monthly/daily/hourly windowing without needing to perform the partition. If windowing isn't applicable, could you explain why you need to partition by day?
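For illustration, here is a minimal sketch of day-based windowing with the Beam Java SDK as an alternative to Partition; MyRecord and the step name are placeholders:

import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

// Group elements into 1-day event-time windows; downstream aggregations and
// windowed sinks then produce one result per day without a per-day PCollection.
static PCollection<MyRecord> windowByDay(PCollection<MyRecord> events) {
  return events.apply("DailyWindows",
      Window.<MyRecord>into(FixedWindows.of(Duration.standardDays(1))));
}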
How are you consuming the partitioned results? You may be running into a known bug where pipelines with many sinks hit OOM errors due to the byte buffers allocated for each sink.

Complex Event Processor 2.1.0 input streams

I'm new to CEP 2.1 and my question is related to the time frame for which the CEP holds on to the input stream.
Let's say that you regularly send data to some input stream, say "HELLOSTREAM".
For how long does the CEP save the inputs to the stream? What is the max time, etc.?
Let's say I send data every day for 365 days; will I get back all the data on the 366th day, or will it truncate the data at some point (e.g. hold only the last 100 days), no matter what time window I set in the query?
Is there a limit?
CEP is a real-time processing server. It is used to find pre-defined patterns in real time and for real-time monitoring. It keeps the data in memory and processes the events, but you can persist the data to Cassandra for distributed processing.
Here, data is kept in memory based on the window that you defined; it depends on the window type you are using and the time or length given to that window. If you are not using any window, it will not keep any data in memory.
If you want to store data for 365 days or 100 days, then it is not a real-time use case. For that you have to use an offline processing server like BAM.
To add to #Mohanadarshan's answer, if what you want is to extract and store some values from a fast event stream over a long period, a better CEP-based approach would be to use persistent event tables (which will be included in the upcoming CEP version 3.0.0, to be released soon). This way you'll be able to do real-time processing against some extracted and persisted data. But as #Mohanadarshan has mentioned, if your requirement is batch processing (and if you do not need to detect anything in real time), WSO2 BAM will be a better option.
Using sliding windows over a very long period to store large amounts of data is not a good idea, as they are stored in memory and you'll also lose data if the server goes down.

Amazon SimpleDB Woes: Implementing counter attributes

Long story short, I'm rewriting a piece of a system and am looking for a way to store some hit counters in AWS SimpleDB.
For those of you not familiar with SimpleDB, the (main) problem with storing counters is that the cloud propagation delay is often over a second. Our application currently gets ~1,500 hits per second. Not all those hits will map to the same key, but a ballpark figure might be around 5-10 updates to a key every second. This means that if we were to use a traditional update mechanism (read, increment, store), we would end up inadvertently dropping a significant number of hits.
One potential solution is to keep the counters in memcache, and use a cron task to push the data. The big problem with this is that it isn't the "right" way to do it. Memcache shouldn't really be used for persistent storage... after all, it's a caching layer. In addition, we'll end up with issues when we do the push: making sure we delete the correct elements, and hoping that there is no contention for them as we're deleting them (which is very likely).
Another potential solution is to keep a local SQL database and write the counters there, updating our SimpleDB out-of-band every so many requests or running a cron task to push the data. This solves the syncing problem, as we can include timestamps to easily set boundaries for the SimpleDB pushes. Of course, there are still other issues, and though this might work with a decent amount of hacking, it doesn't seem like the most elegant solution.
Has anyone encountered a similar issue in their experience, or have any novel approaches? Any advice or ideas would be appreciated, even if they're not completely fleshed out. I've been thinking about this one for a while, and could use some new perspectives.
The existing SimpleDB API does not lend itself naturally to being a distributed counter. But it certainly can be done.
Working strictly within SimpleDB, there are 2 ways to make it work: an easy method that requires something like a cron job to clean up, or a much more complex technique that cleans as it goes.
The Easy Way
The easy way is to make a different item for each "hit", with a single attribute which is the key. Pump the domain(s) with counts quickly and easily. When you need to fetch the count (presumably much less often), you have to issue a query:
SELECT count(*) FROM domain WHERE key='myKey'
Of course this will cause your domain(s) to grow unbounded and the queries will take longer and longer to execute over time. The solution is a summary record where you roll up all the counts collected so far for each key. It's just an item with attributes for the key {summary='myKey'} and a "Last-Updated" timestamp with granularity down to the millisecond. This also requires that you add the "timestamp" attribute to your "hit" items. The summary records don't need to be in the same domain. In fact, depending on your setup, they might best be kept in a separate domain. Either way you can use the key as the itemName and use GetAttributes instead of doing a SELECT.
Now getting the count is a two step process. You have to pull the summary record and also query for 'Timestamp' strictly greater than whatever the 'Last-Updated' time is in your summary record and add the two counts together.
SELECT count(*) FROM domain WHERE key='myKey' AND timestamp > '...'
You will also need a way to update your summary record periodically. You can do this on a schedule (every hour) or dynamically based on some other criteria (for example do it during regular processing whenever the query returns more than one page). Just make sure that when you update your summary record you base it on a time that is far enough in the past that you are past the eventual consistency window. 1 minute is more than safe.
This solution works in the face of concurrent updates because even if many summary records are written at the same time, they are all correct and whichever one wins will still be correct because the count and the 'Last-Updated' attribute will be consistent with each other.
This also works well across multiple domains even if you keep your summary records with the hit records, you can pull the summary records from all your domains simultaneously and then issue your queries to all domains in parallel. The reason to do this is if you need higher throughput for a key than what you can get from one domain.
This works well with caching. If your cache fails you have an authoritative backup.
The time will come when someone wants to go back and edit / remove / add a record that has an old 'Timestamp' value. You will have to update your summary record (for that domain) at that time, or your counts will be off until you recompute that summary.
This will give you a count that is in sync with the data currently viewable within the consistency window. This won't give you a count that is accurate up to the millisecond.
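As a concrete illustration, here is a rough Java sketch of that two-step read using the AWS SDK for Java v1 SimpleDB client; the domain names ("summaries", "hits") and attribute names are assumptions for the example, and pagination of the select results is omitted:

import com.amazonaws.services.simpledb.AmazonSimpleDB;
import com.amazonaws.services.simpledb.model.Attribute;
import com.amazonaws.services.simpledb.model.GetAttributesRequest;
import com.amazonaws.services.simpledb.model.Item;
import com.amazonaws.services.simpledb.model.SelectRequest;

// Sketch: summary count + count of hit items newer than the summary's Last-Updated time.
static long getCount(AmazonSimpleDB sdb, String key) {
  // Step 1: fetch the summary record for this key.
  long summaryCount = 0;
  String lastUpdated = "1970-01-01T00:00:00.000Z";
  GetAttributesRequest get = new GetAttributesRequest("summaries", key).withConsistentRead(true);
  for (Attribute a : sdb.getAttributes(get).getAttributes()) {
    if (a.getName().equals("count")) summaryCount = Long.parseLong(a.getValue());
    if (a.getName().equals("Last-Updated")) lastUpdated = a.getValue();
  }

  // Step 2: count hit items written strictly after the summary's timestamp.
  String select = "select count(*) from `hits` where `key` = '" + key
      + "' and `timestamp` > '" + lastUpdated + "'";
  long recentCount = 0;
  for (Item item : sdb.select(new SelectRequest(select, true)).getItems()) {
    // count(*) comes back as a single item carrying a "Count" attribute.
    recentCount = Long.parseLong(item.getAttributes().get(0).getValue());
  }
  return summaryCount + recentCount;
}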
The Hard Way
The other way is to do the normal read-increment-store mechanism, but also write a composite value that includes a version number along with your value, where the version number you use is 1 greater than the version number of the value you are updating.
get(key) returns the attribute value="Ver015 Count089"
Here you retrieve a count of 89 that was stored as version 15. When you do an update you write a value like this:
put(key, value="Ver016 Count090")
The previous value is not removed, and you end up with an audit trail of updates that is reminiscent of Lamport clocks.
This requires you to do a few extra things:
the ability to identify and resolve conflicts whenever you do a GET
a simple version number isn't going to work; you'll want to include a timestamp with resolution down to at least the millisecond, and maybe a process ID as well
in practice you'll want your value to include the current version number and the version number of the value your update is based on, to more easily resolve conflicts
you can't keep an infinite audit trail in one item, so you'll need to issue deletes for older values as you go
What you get with this technique is like a tree of divergent updates: you'll have one value, and then all of a sudden multiple updates will occur, and you will end up with a bunch of updates based off the same old value, none of which know about each other.
When I say resolve conflicts at GET time I mean that if you read an item and the value looks like this:
        11 --- 12
       /
10 --- 11
       \
        11
You have to be able to figure out that the real value is 14, which you can do if you include, for each new value, the version of the value(s) you are updating.
It shouldn't be rocket science
If all you want is a simple counter, this is way overkill. It shouldn't be rocket science to make a simple counter, which is why SimpleDB may not be the best choice for making simple counters.
That isn't the only way, but most of those things will need to be done if you implement a SimpleDB solution in lieu of actually having a lock.
Don't get me wrong, I actually like this method precisely because there is no lock, and the bound on the number of processes that can use this counter simultaneously is around 100 (because of the limit on the number of attributes in an item). And you can get beyond 100 with some changes.
Note
But if all these implementation details were hidden from you and you just had to call increment(key), it wouldn't be complex at all. With SimpleDB the client library is the key to making the complex things simple. But currently there are no publicly available libraries that implement this functionality (to my knowledge).
To anyone revisiting this issue, Amazon just added support for Conditional Puts, which makes implementing a counter much easier.
Now, to implement a counter - simply call GetAttributes, increment the count, and then call PutAttributes, with the Expected Value set correctly. If Amazon responds with an error ConditionalCheckFailed, then retry the whole operation.
Note that you can only have one expected value per PutAttributes call. So, if you want to have multiple counters in a single row, then use a version attribute.
pseudo-code:
begin
  attributes = SimpleDB.GetAttributes
  initial_version = attributes[:version]
  attributes[:counter1] += 3
  attributes[:counter2] += 7
  attributes[:version] += 1
  SimpleDB.PutAttributes(attributes, :expected => {:version => initial_version})
rescue ConditionalCheckFailed
  retry
end
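In Java terms, the same loop might look roughly like this with the AWS SDK for Java v1; the domain, item, and attribute names are illustrative, and the sketch assumes the counter item already exists with a version attribute:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.simpledb.AmazonSimpleDB;
import com.amazonaws.services.simpledb.model.Attribute;
import com.amazonaws.services.simpledb.model.GetAttributesRequest;
import com.amazonaws.services.simpledb.model.PutAttributesRequest;
import com.amazonaws.services.simpledb.model.ReplaceableAttribute;
import com.amazonaws.services.simpledb.model.UpdateCondition;

static void increment(AmazonSimpleDB sdb, String domain, String item, String counter, long delta) {
  while (true) {
    // Read the current counters and version.
    Map<String, String> attrs = new HashMap<>();
    for (Attribute a : sdb.getAttributes(
        new GetAttributesRequest(domain, item).withConsistentRead(true)).getAttributes()) {
      attrs.put(a.getName(), a.getValue());
    }
    long version = Long.parseLong(attrs.getOrDefault("version", "0"));
    long count = Long.parseLong(attrs.getOrDefault(counter, "0")) + delta;

    // Conditional put: only succeeds if "version" still holds the value we just read.
    List<ReplaceableAttribute> updates = Arrays.asList(
        new ReplaceableAttribute(counter, Long.toString(count), true),
        new ReplaceableAttribute("version", Long.toString(version + 1), true));
    UpdateCondition expected = new UpdateCondition("version", Long.toString(version), true);
    try {
      sdb.putAttributes(new PutAttributesRequest(domain, item, updates, expected));
      return;
    } catch (AmazonServiceException e) {
      if (!"ConditionalCheckFailed".equals(e.getErrorCode())) throw e;
      // Someone else updated the item first; re-read and retry.
    }
  }
}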
I see you've accepted an answer already, but this might count as a novel approach.
If you're building a web app then you can use Google's Analytics product to track page impressions (if the page to domain-item mapping fits) and then use the Analytics API to periodically push that data up into the items themselves.
I haven't thought this through in detail so there may be holes. I'd actually be quite interested in your feedback on this approach given your experience in the area.
Thanks
Scott
For anyone interested in how I ended up dealing with this... (slightly Java-specific)
I ended up using an EhCache on each servlet instance. I used the UUID as a key, and a Java AtomicInteger as the value. Periodically a thread iterates through the cache and pushes rows to a SimpleDB temporary stats domain, as well as writing a row with the key to an invalidation domain (which fails silently if the key already exists). The thread also decrements the counter by the previously read value, ensuring that we don't miss any hits while it was updating. A separate thread pings the SimpleDB invalidation domain, and rolls up the stats in the temporary domains (there are multiple rows for each key, since we're using EC2 instances), pushing it to the actual stats domain.
I've done a little load testing, and it seems to scale well. Locally I was able to handle about 500 hits/second before the load tester broke (not the servlets - hah), so if anything I think running on ec2 should only improve performance.
Answer to feynmansbastard:
If you want to store a huge amount of events, I suggest you use a distributed commit log system such as Kafka or AWS Kinesis. They allow you to consume a stream of events cheaply and simply (Kinesis's pricing is $25 per month for 1K events per second); you just need to implement a consumer (in any language) which bulk-reads all events since the previous checkpoint, aggregates counters in memory, then flushes the data into permanent storage (DynamoDB or MySQL) and commits the checkpoint.
Events can be logged simply using the nginx log and transferred to Kafka/Kinesis using fluentd. This is a very cheap, performant and simple solution.
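A minimal sketch of that consumer loop with the Kafka Java client, assuming a "hits" topic keyed by counter name; the flushToStore() hook standing in for the DynamoDB/MySQL write is hypothetical:

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Inside the consumer process's run loop:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "hit-counters");
props.put("enable.auto.commit", "false"); // checkpoint only after a successful flush
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
  consumer.subscribe(Collections.singletonList("hits"));
  Map<String, Long> counters = new HashMap<>();
  while (true) {
    // Bulk-read events since the last committed offset and aggregate in memory.
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
    for (ConsumerRecord<String, String> r : records) {
      counters.merge(r.key(), 1L, Long::sum);
    }
    flushToStore(counters); // hypothetical: persist totals to DynamoDB or MySQL
    consumer.commitSync();  // commit the checkpoint only after the flush succeeded
    counters.clear();
  }
}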
I also had similar needs/challenges.
I looked at using Google Analytics and Count.ly. The latter seemed too expensive to be worth it (plus they have a somewhat confusing definition of sessions). I would have loved to use GA, but I spent two days using their libraries and some 3rd-party ones (GaDotNet and one other, maybe from CodeProject). Unfortunately I could only ever see counters post in the GA realtime section, never in the normal dashboards, even when the API reported success. We were probably doing something wrong, but we exceeded our time budget for GA.
We already had an existing SimpleDB counter that updated using conditional updates, as mentioned by a previous commenter. This works well, but suffers when there is contention and concurrency, where counts are missed (for example, our most-updated counter lost several million counts over a period of 3 months, versus a backup system).
We implemented a newer solution which is somewhat similar to the answer to this question, except much simpler.
We just sharded/partitioned the counters. When you create a counter you specify the number of shards, which is a function of how many simultaneous updates you expect. This creates a number of sub-counters, each of which has the shard count stored with it as an attribute:
COUNTER (w/5shards) creates :
shard0 { numshards = 5 } (informational only)
shard1 { count = 0, numshards = 5, timestamp = 0 }
shard2 { count = 0, numshards = 5, timestamp = 0 }
shard3 { count = 0, numshards = 5, timestamp = 0 }
shard4 { count = 0, numshards = 5, timestamp = 0 }
shard5 { count = 0, numshards = 5, timestamp = 0 }
Sharded Writes
Knowing the shard count, just randomly pick a shard and try to write to it conditionally. If it fails because of contention, choose another shard and retry.
If you don't know the shard count, get it from the root shard which is present regardless of how many shards exist. Because it supports multiple writes per counter, it lessens the contention issue to whatever your needs are.
Sharded Reads
If you know the shard count, read every shard and sum them.
If you don't know the shard count, get it from the root shard and then read all and sum.
Because of slow update propagation, you can still miss counts when reading, but they should get picked up later. This is sufficient for our needs, although if you wanted more control over this you could ensure that, when reading, the last timestamp was as you expect, and retry.
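To make the pattern concrete, here is a rough Java sketch of sharded reads and writes against SimpleDB (AWS SDK for Java v1); the "counterName.shardN" naming, the "count" attribute, and the single-attempt conditionalIncrement() helper are illustrative assumptions:

import java.util.Random;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.simpledb.AmazonSimpleDB;
import com.amazonaws.services.simpledb.model.Attribute;
import com.amazonaws.services.simpledb.model.GetAttributesRequest;

// Sharded read: read every shard and sum the counts.
static long readCounter(AmazonSimpleDB sdb, String domain, String counterName, int numShards) {
  long total = 0;
  for (int i = 1; i <= numShards; i++) {
    for (Attribute a : sdb.getAttributes(
        new GetAttributesRequest(domain, counterName + ".shard" + i)).getAttributes()) {
      if (a.getName().equals("count")) total += Long.parseLong(a.getValue());
    }
  }
  return total;
}

// Sharded write: pick a random shard and try a conditional increment; on a
// ConditionalCheckFailed error (contention), pick another shard and retry.
static void shardedIncrement(AmazonSimpleDB sdb, String domain, String counterName, int numShards) {
  Random random = new Random();
  while (true) {
    String shard = counterName + ".shard" + (1 + random.nextInt(numShards));
    try {
      conditionalIncrement(sdb, domain, shard); // hypothetical single-attempt conditional put
      return;
    } catch (AmazonServiceException e) {
      if (!"ConditionalCheckFailed".equals(e.getErrorCode())) throw e;
    }
  }
}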