How can I use WSO2 SiddhiQL to cache data?

My question is this: I have a data source that receives real-time data every second. Every hour I compute the average of that hour's data, add it to a running cumulative sum, and write the hourly cumulative value to another data source. For example, if the first hour's average is 10, I output 10; if the second hour's average is 20, I add the first hour's 10 and output 30, and so on until the end of the 24-hour day; on the second day the calculation starts again from the first hour. My problem is how to cache the cumulative value from the previous calculation. How can I solve this kind of problem?
For example (see the attached screenshots of the input data and the expected output):
As in the first picture, the avg column is what I can get through instream#window.externalTimeBatch(datetime, 1 hour); the column after it is the cumulative result I expect to output every hour, as in the second picture.
Every computation should output one result.

You can use an Event Table to cache the data in a relational database such as PostgreSQL/MySQL/Oracle.
First, define a table backed by the database:
@From(eventtable='rdbms', datasource.name='db_event', table.name='_tag_time_value')
define table tEvent (tag string, datetime long, value double);
and when the stream time reaches the reset point (e.g. the start of a new day), you can insert overwrite 0.0 into the event table to restart the running total.
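Putting it together, here is a minimal sketch of the whole pattern (WSO2 CEP / Siddhi 3.x syntax; the instream definition, the tag-based grouping, and all stream names other than tEvent are assumptions, not taken from the question):
@From(eventtable='rdbms', datasource.name='db_event', table.name='_tag_time_value')
define table tEvent (tag string, datetime long, value double);
define stream instream (tag string, datetime long, value double);
/* average the per-second input once per hour */
from instream#window.externalTimeBatch(datetime, 1 hour)
select tag, datetime, avg(value) as hourlyAvg
group by tag
insert into hourlyAvgStream;
/* add the cached running total from the event table to this hour's average */
from hourlyAvgStream as h join tEvent as t
    on h.tag == t.tag
select h.tag, h.datetime, (h.hourlyAvg + t.value) as cumulative
insert into outStream;
/* write the new cumulative value back so it survives until the next hour */
from outStream
select tag, datetime, cumulative as value
insert overwrite tEvent
    on tEvent.tag == tag;
Assuming each tag starts with a 0.0 row in tEvent, outStream then carries 10, 30, ... as in the example above; the daily reset is the insert overwrite 0.0 mentioned earlier.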

Related

Calculate daily average including zero in Power BI

I have a dataset of patients visiting several categories (SPECIALISM) of a hospital. A visit lasts a couple of hours each day. Each row in my dataset represents an hour that they are present for a certain hospital specialism.
Input
I want to calculate, for each hour of the day, the number of patients present on average, per specialism. I used the following measure:
daggem = AVERAGEX(values('Date'[Date]),[distinctpat])
with distinctpat being a distinct count of patient IDs
This gives me almost the desired result, but the tails of the graph are too heavy: it shows an average of 1 patient during the night, but that 1 patient was there on only 1 specific day; normally it is zero. The average as I calculated it does not include all the other nights when there were zero patients, so I would like to obtain an average that is much lower (and more correct).
Output
You probably need an extra table with all the days and times. Compared against the reports, it will let you find the hours without visits.
Here is an example of how to create a Date table with hours; add a relationship and use it in the calculation.
DatesWithHours =
VAR Dates = CALENDAR ( "2021-01-01 00:00:00", "2021-01-03 00:00:00" )
VAR DatesHour =
    GENERATE (
        Dates,
        ADDCOLUMNS (
            GENERATESERIES ( 0, 23, 1 ), -- one row per hour of the day
            "Hours", [Date] + TIME ( [Value], 0, 0 )
        )
    )
RETURN
    DatesHour
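With a relationship from DatesWithHours to the fact table in place, a hedged sketch of the corrected measure (reusing [distinctpat] from the question; treating hours with no visits as 0 is the assumption being encoded):
daggem =
AVERAGEX (
    VALUES ( DatesWithHours[Hours] ), -- iterate every hour slot, including empty ones
    COALESCE ( [distinctpat], 0 )     -- an hour with no patients counts as 0, not blank
)
Because the hour table contains every hour whether or not a patient was present, the nights with zero patients now pull the average down as desired.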
That's the problem with AVERAGEX: it ignores blanks. Try a simple division instead.
daggem =
DIVIDE (
    COUNTROWS ( TargetTable ),
    COUNTROWS ( Dates )
)

Date filtering of DAX measure in pivot chart

I have created 3 DAX measures in Power Pivot to calculate a backlog value, and I have an issue with filtering in a pivot chart. When I filter out a specific time period, the backlog is calculated only from the filtered dates. Instead of filtering, I would like to just "zoom in" on a specific time range.
Measures
Opened:=CALCULATE(COUNTA(Ticket[CreatedOn]))
Closed:=CALCULATE(COUNTA(Ticket[ClosedOn]),USERELATIONSHIP(Ticket[ClosedOn],'Calendar'[Date]))
Backlog:=CALCULATE([Opened]-[Closed],FILTER(ALLSELECTED('Calendar'),'Calendar'[Date]<=MAX('Calendar'[Date])))
Based on the example data below, without any date filter the backlog values are the following:
Date;Backlog
1/1/2021;1
2/1/2021;3
3/1/2021;1
4/1/2021;3
When I filter the dates to exclude the 1st of January, I get the following values:
Date;Backlog
2/1/2021;2
3/1/2021;1
4/1/2021;3
I want to get the same values as without the date filter; I just want to "zoom in" on a specific date range. In my case I have 1 year of input data and want to see the backlog evolution over the last 8 weeks. When I filter to the last 8 weeks, the first week shows only the value calculated for that week, but it should take into account all the values from previous periods plus the result of the first week.
Example data
TicketNumber;Type;CreatedOn;ClosedOn;Status
ticket1;Service Request;1/1/2021;1/3/2021;Closed
ticket2;Incident Record;1/2/2021;1/3/2021;Closed
ticket3;Incident Record;1/2/2021;1/5/2021;Closed
ticket4;Service Request;1/4/2021;;Open
ticket5;Service Request;1/4/2021;;Waiting for
Fixed thanks to my colleague! :)
I had to change ALLSELECTED to ALL, so that the running total also includes the dates the slicer filters out:
Backlog:=CALCULATE([Opened]-[Closed],FILTER(ALL('Calendar'),'Calendar'[Date]<=MAX('Calendar'[Date])))

How to calculate accumulated time for a defined frequency?

I have rows containing descriptions of services that have been ordered by our customers.
Table:
OrderedServices
Columns:
Id (key)
CustomerId
ServiceId
StartDate
EndDate
AmountOfTimeOrdered (hours)
IntervalType (month, week or day)
Interval (integer)
An example:
1;24343;98;2020-01-20;2020-06-05;1.5;day;3
The above is read as "Customer with id 24343 has ordered service #98 to be executed for 1.5 hrs every 3rd day during the period 2020-01-20 up until 2020-06-05".
The first day of execution is always StartDate, so in the given example the service is first executed on 2020-01-20, followed by 2020-01-23 (20+3), 2020-01-26, 2020-01-29, and so on.
Now I want to calculate the total amount of time executed for a given ServiceType for a given time period.
E.g. 2020-01-01 - 2020-01-31 = 4 x 1.5 = 6hrs in total executed time for the above.
What I can’t figure out is how to create a measure, or a calculated table to achieve this.
Does anyone have an idea?
Kind regards,
Peter
Go to the Query Editor and use the following steps:
If your column looks like the one in your example, use Split Column by Delimiter as the first step.
After this, just add the following custom column:
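The snippet itself is missing from the original answer; as a stand-in, here is a hedged Power Query M sketch of such a custom column, assuming the split columns have been typed and renamed to match the schema above and that IntervalType is 'day' (for 'week' or 'month' the step would need to be scaled accordingly):
// list of execution dates: StartDate, then every [Interval] days up to EndDate
= List.Dates(
      [StartDate],
      Number.IntegerDivide( Duration.Days( [EndDate] - [StartDate] ), [Interval] ) + 1,
      #duration( [Interval], 0, 0, 0 )
  )
Expanding this list column to new rows yields one row per execution date, so a plain sum of AmountOfTimeOrdered filtered to the requested period reproduces the example: 4 executions in 2020-01-01 - 2020-01-31 × 1.5 = 6 hrs.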

Power BI Count time values

I'm working in Power BI. I've got some pallets registered at specific times during a day. I want a chart that shows the registered times of the day against a goal line at 2 hours (02:00:00). Any idea how to build this chart?
Thanks!
BR
So, I'm not sure if you also have a date table, which would make this a lot easier to visualize, but given the fields you have provided, I was able to come up with this solution:
Convert your timestamp by creating a calculated column representing the true time (I couldn't figure out how to change the formatting to display the time correctly, so I just added 12 hours to your values):
True Time =
Table[Time] + TIME ( 12, 0, 0 )
Then take the difference between your 'Goal Line' and the timestamp and insert those values into a calculated column:
Goal Line and TimeStamp Delta =
DATEDIFF ( TIME ( 14, 0, 0 ), Sheet1[True Time], MINUTE )
Now, using an area chart with 'Barcode' as the "Axis" and the [Goal Line and TimeStamp Delta] column as the "Values", you can create a visual that shows all barcodes processed before 2 pm as negative and everything past 2 pm as positive.

Running total not aggregating on line chart

A table with some sales data has an associated running total measure. When viewed in the Data view of Power BI Desktop, the data does reflect an aggregated total.
However when applied to a line chart the running total is simply the monthly totals. The expectation would be that a running total never decreases (assuming only positive sales) and that the line chart would reflect the values in the measure. So month by month should actually be 500, 1500, 3000.
Update 1: As per Foxan's suggestion - same result:
Update 2: Works when using an index instead of a date (dd/MM/yyyy):
Your ISONORAFTER filter should be based on Date instead of Spend for a running total (or, in the absence of a date column, on a column that identifies the order you are summarizing in, e.g. an incremental index), i.e.
Spend running total in Date =
CALCULATE (
    SUM ( Spend[Spend] ),
    FILTER (
        ALLSELECTED ( Spend ),
        ISONORAFTER ( Spend[Date], MAX ( Spend[Date] ), DESC )
    )
)
It's causing some confusion here because your sample data in the Spend column is coincidentally in ascending order of values (100 -> 200 -> 300). If you update it to some random data, you'll notice it won't work in the first place.
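For completeness, the index-based variant from Update 2 is the same measure with the date column swapped out (Spend[Index] stands for an assumed incremental index column):
Spend running total by Index =
CALCULATE (
    SUM ( Spend[Spend] ),
    FILTER (
        ALLSELECTED ( Spend ),
        -- keep every row whose index is at or before the current row's index
        ISONORAFTER ( Spend[Index], MAX ( Spend[Index] ), DESC )
    )
)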