DATEDIFF between 2 rows aggregating by 2 columns DAX - powerbi

I have a big data set with the structure as shown below.
Operation  User  Timestamp               Elapsed time
12         1     2018-01-03 11:19:02 AM
12         1     2018-01-03 12:34:02 PM
12         1     2018-01-04 8:34:02 AM
12         2     2018-02-03 9:34:02 AM
12         2     2018-02-03 11:12:42 AM
12         3     2018-02-03 12:12:00 PM
15         1     2018-01-02 9:22:32 AM
15         1     2018-01-02 9:25:32 AM
15         2     2018-01-02 9:25:32 AM
The goal is to compute the column "Elapsed time" using DAX in Power BI. The column shows the difference/duration between the current timestamp and the previous timestamp for the same user and the same operation.
I've tried something along the lines of:
Elapsed time =
DATEDIFF (
    CALCULATE (
        MAX ( data[Timestamp] ),
        ALLEXCEPT ( data, data[Operation], data[User] ),
        data[Timestamp] < EARLIER ( data[Timestamp] )
    ),
    data[Timestamp],
    MINUTE
)
But it complains: "A single value for column 'Timestamp' in table 'data' cannot be determined. This can happen when a measure formula refers to a column that contains many values without specifying an aggregator such as min, max, count, or sum to get a single result."
I'm very new to DAX, so I'd appreciate any help.

Since the 'Table'[Operation] and 'Table'[User] values of the current row are to be used as filters, a very simple approach is to use CALCULATE to trigger context transition, transforming the current row context into the corresponding filter context, and then to replace the filter over 'Table'[Timestamp] so that it only keeps timestamps less than the current one, previously saved to a variable. The context transition automatically sets the correct filters over 'Table'[Operation] and 'Table'[User].
Elapsed time =
VAR CurrentTimestamp = 'Table'[Timestamp]
RETURN
DATEDIFF (
CALCULATE ( MAX ( 'Table'[Timestamp] ), 'Table'[Timestamp] < CurrentTimestamp ),
CurrentTimestamp,
MINUTE
)
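To sanity-check what this calculated column should produce, here is a small Python sketch of the same logic on the question's data: for each row, find the latest earlier timestamp of the same (Operation, User) pair and count the minute boundaries between them, as DATEDIFF with MINUTE does. This is only a simulation of the DAX, not DAX itself.

```python
from datetime import datetime

# Rows from the question's table: (operation, user, timestamp).
rows = [
    (12, 1, "2018-01-03 11:19:02"),
    (12, 1, "2018-01-03 12:34:02"),
    (12, 1, "2018-01-04 08:34:02"),
    (12, 2, "2018-02-03 09:34:02"),
    (12, 2, "2018-02-03 11:12:42"),
    (12, 3, "2018-02-03 12:12:00"),
    (15, 1, "2018-01-02 09:22:32"),
    (15, 1, "2018-01-02 09:25:32"),
    (15, 2, "2018-01-02 09:25:32"),
]

def elapsed_minutes(rows):
    """For each row: minutes since the latest earlier timestamp of the
    same (operation, user), or None when there is no earlier row."""
    parsed = [(op, user, datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
              for op, user, ts in rows]
    result = []
    for op, user, ts in parsed:
        earlier = [t for o, u, t in parsed
                   if o == op and u == user and t < ts]
        if earlier:
            prev = max(earlier)
            # DATEDIFF(..., MINUTE) counts minute boundaries crossed,
            # so truncate the seconds before differencing.
            delta = ts.replace(second=0) - prev.replace(second=0)
            result.append(int(delta.total_seconds() // 60))
        else:
            result.append(None)
    return result

print(elapsed_minutes(rows))
# → [None, 75, 1200, None, 98, None, None, 3, None]
```

The first row of each (Operation, User) group comes back blank, exactly as the DAX measure would return BLANK() when MAX finds no earlier timestamp.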

Typing on the mobile, so apologies for possible errors. Assuming this is a calculated column:
Elapsed time =
DATEDIFF (
CALCULATE (
MAX ( Table[Timestamp] ),
FILTER (
Table,
Table[User] = EARLIER ( Table[User] )
&& Table[Operation] = EARLIER ( Table[Operation] )
&& Table[Timestamp] < EARLIER ( Table[Timestamp] )
)
),
Table[Timestamp],
MINUTE
)
Where Table is your table name.
There surely are other ways to do this, so apologies for the non-optimal approach.

Related

PowerBI: Using a measure to get a rank

I'm kind of new to Power BI, but I haven't been able to find an answer to my problem.
I have this kind of dataset:
Timestamp ErrorType Duration (ms)
16/05/10 8:00 3 100
16/05/10 8:00 4 1000
17/05/10 10:00 3 100
18/05/10 8:00 3 200
18/05/10 10:00 4 200
18/05/10 10:00 5 50
19/05/10 10:00 5 10
19/05/10 10:00 5 10
The names are hopefully pretty self-explanatory: Timestamp is the time at which the issue occurred, ErrorType is a code identifying the kind of error, and Duration indicates how long the issue lasted. What I'd like is a measure that gives me the rank of each error type, taking into account any filters applied on the page.
For example, if I restricted myself to the period 17/05 to 19/05, the measure for 3 would give 1 and the measure for 4 would give 2, whereas it would be the opposite if I sampled over the whole time scale. In both cases, the measure for 5 would give 3. The first case is illustrated in the table below:
ErrorType Rank
3 1
4 2
5 3
The idea behind this is to color-code a graph by importance within the specified time frame. For example, the error with the longest cumulative duration in the time frame would be colored red, the second orange, and so on for the first eight. That part I know how to do.
I've already tried something, but it just won't work; for some reason it says it can't find the column called "Total" in the table I've just created...
Color =
VAR TotalTime =
    CALCULATE (
        SUM ( 'AuFilDeLEau'[Duration] ),
        ALLSELECTED ( 'AuFilDeLEau'[ErrorCode] )
    )
VAR ColumnTotalTime =
    SUMMARIZE ( Table, Table[Duration], "Total", SUM ( Table[Duration] ) )
VAR RankforColor =
    RANKX ( ColumnTotalTime, [Total], TotalTime )
RETURN
    RankforColor
Currently, this only gives me back 1 for each and every ErrorType.
I hope my issue is clear, and you'll be able to help me, thanks in advance ^^
EDIT: tried smpa01's first solution; it didn't work.
EDIT: tried smpa01's second solution; it worked. Marked as solved. Thanks!
_rank =
VAR _1 =
MAX ( 'Table'[ErrorType] )
VAR _2 =
RANKX (
FILTER ( ALLSELECTED ( 'Table' ), 'Table'[ErrorType] = _1 ),
CALCULATE ( MAX ( 'Table'[Duration] ) ),
,
ASC,
DENSE
)
RETURN
_2
EDIT: Adding this portion after the revised data:
_revisedRank =
IF (
HASONEVALUE ( 'Table'[ErrorType] ) = TRUE (),
RANKX (
ALL ( 'Table'[ErrorType] ),
CALCULATE ( SUM ( 'Table'[Duration (ms)] ) ),
,
DESC
)
)
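The revised measure ranks error types by their total duration, descending, under whatever filters are selected. A small Python sketch of that ranking logic, run against the question's data (not DAX, just a simulation of what RANKX computes here):

```python
def rank_error_types(rows):
    """Dense rank of each ErrorType by total Duration, descending,
    mirroring RANKX over ALL('Table'[ErrorType]) with SUM of Duration."""
    totals = {}
    for error_type, duration in rows:
        totals[error_type] = totals.get(error_type, 0) + duration
    # Distinct totals sorted high-to-low give the dense rank positions.
    ordered = sorted(set(totals.values()), reverse=True)
    return {et: ordered.index(total) + 1 for et, total in totals.items()}

# Full data set from the question: (ErrorType, Duration in ms).
all_rows = [(3, 100), (4, 1000), (3, 100), (3, 200),
            (4, 200), (5, 50), (5, 10), (5, 10)]
print(rank_error_types(all_rows))
# → {3: 2, 4: 1, 5: 3}  (ErrorType 4 has the largest total, 1200 ms)

# Restricted to 17/05-19/05 (the last six rows), as in the example:
print(rank_error_types(all_rows[2:]))
# → {3: 1, 4: 2, 5: 3}
```

Both scenarios match the behavior described in the question: 3 and 4 swap ranks depending on the time window, while 5 stays third.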

How can I rank the total number of calls taken by a rep for each date in Power Bi/DAX?

I'm looking to create rankings that answer this goal:
For each Date, rank number of calls taken from highest to lowest per rep
Maintain this ranking for each date, regardless of how many dates are included
What I want to end up with:
Name   Date        Total Calls  Rank
Rep A  11/10/2020  27           3
Rep B  11/10/2020  28           2
Rep C  11/10/2020  29           1
Rep A  11/11/2020  27           3
Rep B  11/11/2020  28           2
Rep C  11/11/2020  29           1
I've found enough information on how to rank across all dates, but I can't figure out how to rank in the row context of each specific date. Any help would be appreciated!
Update:
I followed the instructions below which were really helpful, but came up with this ranking where total calls are not in order:
Ranking Not in Order of Most Calls
Here is the DAX as I have it:
Call Rank =
IF (
ISINSCOPE ( Final[Name] ),
CALCULATE (
RANKX (
ALL ( Final[Name] ),
CALCULATE (
SUM ( Final[Total Calls] )
)
),
ALLEXCEPT (
Final,
Final[Data Period],
Final[Name]
)
)
)
Assuming your input table is named Calls, we can define a measure to compute, for each day, the ranking of Name by number of calls:
RankPerDay =
IF (
ISINSCOPE ( Calls[Name] ),
CALCULATE (
RANKX (
ALL ( Calls[Name] ),
CALCULATE (
SUM ( Calls[Total Calls] )
)
),
ALLEXCEPT (
Calls,
Calls[Date],
Calls[Name]
)
)
)
First we check whether the measure is being evaluated at the Name level of granularity; otherwise we return BLANK(). Then we use CALCULATE and ALLEXCEPT to remove any existing filter context, keeping only the filters over Name and Date. With this modified filter context, we call RANKX, building a ranking table containing all Names and the total calls per Date and Name, in order to retrieve the current Name's ranking.
This is the resulting matrix visual with Calls[Date] and Calls[Name] on the rows
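A quick Python sketch of the per-date ranking the measure computes, using the expected table from the question (a simulation of RANKX restricted by ALLEXCEPT to the current Date, not DAX itself):

```python
def rank_per_day(rows):
    """For (name, date, calls) rows, rank names within each date by
    total calls, highest first -- the RANKX + ALLEXCEPT pattern."""
    totals = {}
    for name, date, calls in rows:
        totals[(date, name)] = totals.get((date, name), 0) + calls
    ranks = {}
    for (date, name), total in totals.items():
        # Rank = 1 + number of names with a strictly higher total
        # on the same date.
        higher = sum(1 for (d, _), t in totals.items()
                     if d == date and t > total)
        ranks[(date, name)] = higher + 1
    return ranks

rows = [
    ("Rep A", "11/10/2020", 27), ("Rep B", "11/10/2020", 28),
    ("Rep C", "11/10/2020", 29), ("Rep A", "11/11/2020", 27),
    ("Rep B", "11/11/2020", 28), ("Rep C", "11/11/2020", 29),
]
print(rank_per_day(rows))
```

Each date gets its own independent ranking (C first, B second, A third on both days), which is exactly what restricting the filter context to the current Date achieves.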

DAX AVERAGEX including a 0 for total average

My table represents users working on a production line. Each row in the table provides the number of units a user produced within a 15 minute window. I am trying to calculate Units/Hour per User (which seems to be working fine), but my overall Average seems to be off.
Table and results of my measure:
Row by row it is what I am looking for, but the total average of 179.67 is wrong; it should be 196. I think for the 11:30 timestamp Leondro did not have any work, and it is including a 0 for him. I would like to exclude that.
Measure:
UPH =
var unitshour = CALCULATE(SUM(Table1[Units]) / (DISTINCTCOUNT(Table1[DateTime])/4))
var users = AVERAGEX( VALUES(Table1[DateTime]), DISTINCTCOUNT(Table1[Username]))
RETURN
unitshour/ users
I don't think 196 is the number you want if you want to treat each time period equally. I'd suggest this alternative:
UPH =
AVERAGEX (
VALUES ( Table1[DateTime] ),
CALCULATE ( 4 * SUM ( Table1[Units] ) / DISTINCTCOUNT ( Table1[Username] ) )
)
If you want each time period to be weighted by the number of users in that time period, then 196 is what you want.
UPHUserWeighted =
VAR Summary =
SUMMARIZE (
Table1,
Table1[DateTime],
Table1[Username],
"UPH", 4 * SUM ( Table1[Units] ) / DISTINCTCOUNT ( Table1[Username] )
)
RETURN AVERAGEX ( Summary, [UPH] )
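To make the difference between the two averages concrete, here is a Python sketch of both calculations on a small made-up data set (the numbers below are hypothetical, not the asker's actual data; user names are invented). Each row is a 15-minute bucket for one user.

```python
# Hypothetical 15-minute buckets: (period, user, units produced).
rows = [
    ("11:00", "Ana", 50), ("11:00", "Leo", 30),
    ("11:15", "Ana", 50), ("11:15", "Leo", 30),
    ("11:30", "Ana", 50),            # Leo absent in this bucket
]

def uph_equal_weight(rows):
    """AVERAGEX over VALUES(DateTime): each period weighs the same."""
    periods = sorted({t for t, _, _ in rows})
    per_period = []
    for p in periods:
        units = sum(u for t, _, u in rows if t == p)
        users = len({n for t, n, _ in rows if t == p})
        per_period.append(4 * units / users)   # 4 buckets per hour
    return sum(per_period) / len(periods)

def uph_user_weighted(rows):
    """Average over (period, user) pairs: periods with more users
    contribute more rows, i.e. the SUMMARIZE/AVERAGEX variant."""
    per_pair = [4 * u for _, _, u in rows]     # one user per pair
    return sum(per_pair) / len(per_pair)

print(uph_equal_weight(rows), uph_user_weighted(rows))
```

With these numbers the equal-weight average is 520/3 ≈ 173.33 UPH while the user-weighted average is 168 UPH: the bucket where only one user worked counts fully in the first measure but contributes only one row out of five in the second.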

PowerBI - Cumulative Total with Multiple Criteria

New to PowerBI, so forgive me for the description here. I'm working with a dataset of retail headcount sensors, which gives me a table of locations, timestamps, and a count of shoppers:
Room TimeStamp Count_In
123 3/13/2019 8
456 4/4/2019 9
123 3/28/2019 11
123 3/18/2019 11
456 3/22/2019 3
etc...
I'm trying to calculate a running total for each "room" over time. The overall running total column is easy:
C_In =
CALCULATE (
SUM ( Sheet1[In] ),
ALL ( Sheet1 ),
Sheet1[Time] <= EARLIER ( Sheet1[Time] )
)
But I'm unable to figure out how to add that second filter, making sure that I'm only summing for each distinct location. Help is appreciated!
Your ALL function removes all filters on Sheet1; use ALLEXCEPT instead to keep the filter on the current row's Room.
C_In =
CALCULATE (
SUM ( Sheet1[In] ),
ALLEXCEPT ( Sheet1, Sheet1[Room] ),
Sheet1[Time] <= EARLIER ( Sheet1[Time] )
)
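A Python sketch of what this calculated column produces on the question's sample rows: for each row, sum Count_In over rows of the same room with a timestamp at or before the current one (a simulation of the ALLEXCEPT + EARLIER pattern, not DAX).

```python
from datetime import date

# Rows from the question: (room, timestamp, count_in).
rows = [
    (123, date(2019, 3, 13), 8),
    (456, date(2019, 4, 4), 9),
    (123, date(2019, 3, 28), 11),
    (123, date(2019, 3, 18), 11),
    (456, date(2019, 3, 22), 3),
]

def running_total(rows):
    """Per row: sum of Count_In for the same room at or before that
    row's timestamp. Row order in the table does not matter."""
    return [sum(c for r, t, c in rows if r == room and t <= ts)
            for room, ts, _ in rows]

print(running_total(rows))
# → [8, 12, 30, 19, 3]
```

Note the result is independent of the physical order of the rows: the 3/18 row for room 123 still gets 19 (8 + 11) even though it appears after the 3/28 row, because the filter is on the timestamp, not the row position.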

PowerPivot 3 month rolling average

I need some help creating a measure in PowerPivot. I have Googled and tried all the options I could find, without success. I have a fact table with sales leads. Some of the leads resulted in a sale, some did not. I need to measure the value of the leads: I sum the values and divide by the number of records in the table.
Average Total of Leads:=calculate(Table1[Sum of Value]/Table1[Count of Lead name])
My problem is to create the measure which give me 3 months rolling average.
I have tried:
Roll3Average :=
[Average Value of leads]
    / CALCULATE (
        DISTINCTCOUNT ( dimdate[MonthName] ),
        DATESINPERIOD ( dimdate[Dates], LASTDATE ( dimdate[Dates] ), -3, MONTH )
    )
I have tried:
Rolling3Average:=IF(COUNTROWS(VALUES(dimdate[MonthName])) = 1,
CALCULATE(
[Average of Value]/ COUNTROWS(VALUES(dimdate[MonthName] ) ) ,
DATESBETWEEN(
dimdate[Dates],
FIRSTDATE(PARALLELPERIOD(dimdate[dates], -2, MONTH)),
LASTDATE(PARALLELPERIOD(dimdate[dates], 0, MONTH))
), ALL(DimDate)
)
)
I have tried:
Total Sales rolling :=
CALCULATE (
    AVERAGEX ( Table1, [Average Total of deals] ),
    FILTER (
        ALL ( dimdate ),
        dimdate[Month] >= MAX ( dimdate[Month] ) - 2
            && dimdate[Month] <= MAX ( dimdate[Month] )
    )
)
I cannot get it right.
I hope someone can see where I go wrong.
#Marcus
Click here and see my data model. Thanks.
I still have trouble with my data model. I have linked a very simplified example. I hope someone can help me.
Thank you.
Note: in the example I am using [Sales] instead of leads.
The main structural change you will want to make is to create a month_index in your dimDates table. The advantage of having that field is that it makes calculating the total over 3 months easier, since it removes having to handle year boundaries like in your 2nd example. The other advantage is that a month_index handles non-standard calendars, e.g. 4-4-5.
To start with:
Sales:=SUM(Data[Qty])
The next part is to calculate the sales (or leads) over the 3 months. Below we use the month_index field to quickly define the date range over which we want to sum.
3_Month_Sales :=
CALCULATE (
[Sales],
FILTER (
ALL ( dimdate ),
dimdate[Month_index] <= MAX ( dimdate[Month_index] )
&& dimdate[Month_index]
>= MAX ( dimdate[Month_index] ) - 2
)
)
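The month_index trick reduces the window to simple integer arithmetic. A Python sketch of the same filter on some hypothetical monthly totals (these numbers are invented for illustration):

```python
# Hypothetical monthly totals keyed by a sequential month_index
# (the index runs across year boundaries, e.g. Dec = 12, next Jan = 13).
sales_by_month = {1: 100, 2: 200, 3: 300, 4: 400}

def three_month_sales(sales_by_month, month_index):
    """Sum of sales for month_index - 2 .. month_index, mirroring
    the FILTER over dimdate[Month_index] in the DAX measure."""
    return sum(v for m, v in sales_by_month.items()
               if month_index - 2 <= m <= month_index)

print([three_month_sales(sales_by_month, m)
       for m in sorted(sales_by_month)])
# → [100, 300, 600, 900]
```

Because the window is just "index minus 2 through index", December-to-February works exactly like any other three-month span, with no special-case logic.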
The next part depends on the requirement, since the average could be calculated two ways. The main question is whether it is a plain 3-month average, or whether the denominator should be the number of months that have a value greater than 0.
The simple way:
3_Month_Average:=DIVIDE( [3_Month_Sales], 3)
The more complex way, in which I learned you can wrap SUMX in a CALCULATE. The idea is that the CALCULATE looks at the 3-month period, and then the SUMX iterates down by Year, then Month. At the end it checks whether sales is greater than 0; if it is, a 1 is assigned, and those 1s are summed.
Count_of_Periods :=
CALCULATE (
SUMX (
VALUES ( dimdate[Year] ),
SUMX ( VALUES ( dimdate[Month] ), IF ( [Sales] > 0, 1 ) )
),
FILTER (
ALL ( dimdate ),
dimdate[Month_index] <= MAX ( dimdate[Month_index] )
&& dimdate[Month_index]
>= MAX ( dimdate[Month_index] ) - 2
)
)
And then finally
3_Month_Alternative:=DIVIDE([3_Month_Sales], [Count_of_Periods])
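A Python sketch of the alternative average, using hypothetical monthly totals where one month in the window has no sales (mirroring the answer's example of a removed month; the numbers themselves are invented):

```python
# Hypothetical monthly totals; month 4 has no sales.
sales_by_month = {1: 100, 2: 200, 3: 300, 4: 0, 5: 400}

def window(sales_by_month, month_index):
    """Values inside the 3-month window ending at month_index."""
    return [v for m, v in sales_by_month.items()
            if month_index - 2 <= m <= month_index]

def count_of_periods(sales_by_month, month_index):
    """Months in the window with sales above zero -- the
    Count_of_Periods measure."""
    return sum(1 for v in window(sales_by_month, month_index) if v > 0)

def three_month_alternative(sales_by_month, month_index):
    """DIVIDE([3_Month_Sales], [Count_of_Periods])."""
    total = sum(window(sales_by_month, month_index))
    months = count_of_periods(sales_by_month, month_index)
    return total / months if months else None

print(three_month_alternative(sales_by_month, 5))
# → 350.0  ((300 + 0 + 400) / 2, since only 2 months had sales)
```

The simple version would divide the same 700 by 3 (≈233.33); dividing by the count of non-empty months instead yields 350, which is the behavior the Count_of_Periods measure provides.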
Below is an image using some random sample data, showing how the different fields interact. As part of the example, the April 2017 data was removed to show how the Count_of_Periods calculation handles the fact that there was no data in that period.