How can I write a DAX cumulative measure that not only works over time but also lets me draw a separate line for each category?
As soon as I drag a category into the 'Legend' of the line chart it stops working: every line shows the cumulative total of all categories.
This is what I expect based on my data:
Here is my DAX. The resulting 'total' cumulative is correct, but it does not break down by legend/category:
CALCULATE(
    SUM( Fact[Value] ),
    FILTER(
        ALLSELECTED( Fact ),
        Fact[Date] <= MAX( Fact[Date] )
    )
)
Simply add the Category column to the filter:
CALCULATE(
    SUM( Fact[Value] ),
    FILTER(
        ALLSELECTED( Fact ),
        Fact[Date] <= MAX( Fact[Date] )
            && Fact[Category] = MAX( Fact[Category] ) // <- This line
    )
)
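As an alternative sketch (not from the original answer), you can keep the date logic as-is and restore the legend's category filter with VALUES after ALLSELECTED has cleared it; the measure name here is only illustrative:
Cumulative Value =
CALCULATE(
    SUM( Fact[Value] ),
    FILTER(
        ALLSELECTED( Fact ),
        Fact[Date] <= MAX( Fact[Date] )
    ),
    VALUES( Fact[Category] ) // re-applies the category of the current legend item
)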
Related
I have a table summing Meter Hrs. The total is incorrectly showing zero. I need it to sum all the Meter Hrs values:
Meter Hrs is a measure defined as the following:
Meter Hrs Total =
CALCULATE(MIN('Tenna Hours'[total_hours]),LASTDATE('Tenna Hours'[import_date]))
- CALCULATE(MIN('Tenna Hours'[total_hours]),FIRSTDATE('Tenna Hours'[import_date]))
I thought this summing to zero might be due to incorrect data types. [total_hours] has a data type of Decimal Number and [import_date] has a data type of Date/Time.
Try creating a SUMMARIZEd table:
Meter Hrs Total =
VAR MyTable =
    ADDCOLUMNS(
        SUMMARIZE( 'Tenna Hours', 'Tenna Hours'[Equipment] ), -- one row per piece of equipment
        "Meter Hrs",
            CALCULATE(
                MIN( 'Tenna Hours'[total_hours] ),
                LASTDATE( 'Tenna Hours'[import_date] )
            )
                - CALCULATE(
                    MIN( 'Tenna Hours'[total_hours] ),
                    FIRSTDATE( 'Tenna Hours'[import_date] )
                )
    )
RETURN
    IF(
        ISFILTERED( 'Tenna Hours'[Equipment] ),
        MINX( MyTable, [Meter Hrs] ), -- a specific equipment is in context: show its value
        SUMX( MyTable, [Meter Hrs] )  -- total row: add up the per-equipment values
    )
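A shorter variant of the same idea (not from the original answer) is to iterate the equipment values and let context transition evaluate the question's original per-equipment measure for each one. This sketch assumes the original [Meter Hrs Total] definition from the question is kept as the base measure, and the new measure name is illustrative:
Meter Hrs Grand Total =
SUMX(
    VALUES( 'Tenna Hours'[Equipment] ),
    [Meter Hrs Total] -- the original measure, evaluated once per equipment and then summed
)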
I have a measure (Users_1) that calculates the number of rows between specific dates that have the parameter is_sql = 0.
This measure is used in a table alongside other measures.
I have 5 more filters on the page that should affect this specific measure, so I cannot use ALL(users).
One of the filters on this page is "is_sql". When is_sql = 1, every measure except (Users_1) should change to the corresponding value; measure (Users_1) should stay the same.
Now when I choose is_sql = 1, the measure (Users_1) is blank.
Users_1 =
CALCULATE(
COUNTROWS( 'users' ),
FILTER(
KEEPFILTERS('users'),
'users'[date (days)] <= MAX( 'Calendar'[Date] )
&&'users'[date (days)] >= MIN( 'Calendar'[Date] )
&&'users'[is_SQL] = 0
)
)
You will want to avoid using KEEPFILTERS on any filter that needs to override the existing filter context:
Users_1 =
-- Capture the slicer dates in variables so the CALCULATE filters below stay
-- simple column predicates (aggregation functions are not allowed directly in them).
VAR MinDate = MIN( 'Calendar'[Date] )
VAR MaxDate = MAX( 'Calendar'[Date] )
RETURN
    CALCULATE(
        COUNTROWS( 'users' ),
        'users'[date (days)] >= MinDate,
        'users'[date (days)] <= MaxDate,
        'users'[is_SQL] = 0
    )
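Each of those column predicates is shorthand for a FILTER over ALL of that single column, which is why the is_SQL page filter gets replaced rather than intersected. A rough sketch of the expanded equivalent, for illustration only (the measure name is made up):
Users_1 expanded =
VAR MinDate = MIN( 'Calendar'[Date] )
VAR MaxDate = MAX( 'Calendar'[Date] )
RETURN
    CALCULATE(
        COUNTROWS( 'users' ),
        FILTER(
            ALL( 'users'[date (days)] ),
            'users'[date (days)] >= MinDate && 'users'[date (days)] <= MaxDate
        ),
        FILTER( ALL( 'users'[is_SQL] ), 'users'[is_SQL] = 0 )
    )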
I have the following dataset
user_id, login_date
111, 01/02/2021
222, 02/15/2021
444, 02/20/2021
555, 01/15/2021
222, 03/10/2021
444, 03/11/2021
I want to count the number of unique active user_id values in the last 90 days, based on the max date of my date slicer. I'd like to solve this without using filters. This also needs to be dynamic, as the max date can be changed from the date slicer.
From what I understand so far, I will need to evaluate, for each row, whether the date difference between the row's date and the max date of the slicer is less than 90 days. Then, for all the rows where the date diff is less than 90 days, I want to count the distinct number of users.
So basically I will have three layers in my final formula:
Evaluate the date diff.
Filter out rows where the date diff is greater than 90 days.
Count the distinct users in the remaining rows.
I've tried many approaches and formulas. I think this one is the closest to something that could work:
Measure test = CALCULATE(SUMX(DISTINCT(mytable[user_id]),filter(mytable,DATEDIFF(SELECTEDVALUE(mytable[login_date]),[Max range date],DAY)>90)))
This formula returns the following error:
The expression refers to multiple columns. Multiple columns cannot be converted to a scalar value.
I've also tried applying an IF statement to output the date diff as 0 or 1, hoping to be able to sum this for each unique id with something like this:
SUMX( VALUES(my_table[user_id]), IF(DATEDIFF(SELECTEDVALUE(mytable[login_date]),[Max range date],DAY)>90,0,1) )
Anyway, I'm kind of stuck. Hopefully my question is clear enough.
You will have to create a disconnected table and use its date column in the slicer. I have prepared a Power BI file which includes 2 very common scenarios; I hope that helps you.
File - Simon.pbix
Screenshot of the report - https://ibb.co/xjrjVv5
Screenshot of the model - https://ibb.co/Ws1z8D7
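The 'Simon Date Table' referenced below is that disconnected slicer table. Its exact definition is only visible in the screenshots, but a minimal sketch could be a calculated table like the following, created with no relationship to the simon table:
Simon Date Table =
SELECTCOLUMNS(
    CALENDAR( MIN( simon[Login Date] ), MAX( simon[Login Date] ) ),
    "Login Date", [Date]
)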
DAX Code -
for simon =
IF (
ISINSCOPE ( simon[Login Date] )
|| ISINSCOPE ( simon[User ID] ),
VAR LastVisibleDate =
CALCULATE (
MAX ( 'Simon Date Table'[Login Date] ),
ALLSELECTED ( 'Simon Date Table' )
)
VAR CurrentDate =
MAX ( simon[Login Date] )
VAR TimeJump = 90
VAR Result =
CALCULATE (
DISTINCTCOUNT ( simon[User ID] ),
simon[Login Date] <= LastVisibleDate,
simon[Login Date] > LastVisibleDate - TimeJump,
ALLSELECTED ( simon )
)
RETURN
Result
)
Second version:
for simon 2 =
IF (
ISINSCOPE ( simon[Login Date] )
|| ISINSCOPE ( simon[User ID] ),
VAR LastVisibleDate =
CALCULATE (
MAX ( 'Simon Date Table'[Login Date] ),
ALLSELECTED ( 'Simon Date Table' )
)
VAR CurrentDate =
MAX ( simon[Login Date] )
VAR TimeJump = 90
VAR Result =
IF (
CurrentDate <= LastVisibleDate,
CALCULATE (
DISTINCTCOUNT ( simon[User ID] ),
simon[Login Date] <= CurrentDate,
simon[Login Date] > CurrentDate - TimeJump,
ALLSELECTED ( simon )
)
)
RETURN
Result
)
Can anyone help me create a YTD Average % calculation?
I have created 'A' and 'B' using the following DAX:
A = DIVIDE([Current Month W/Allowance Over 90 $],[Aging],0)
B = DIVIDE([UnderBill],[OverBill],0)
Now I need to create a YTD Average % calculation based on the above two calculations.
This is what I am looking for:
YTD A = Average of 'A' (this average should be YTD)
YTD B = Average of 'B' (this average should be YTD)
So if we look at YTD A for 08/01/2019, in Excel I did the average =AVERAGE(C2:C9) and the result is 65%; for the next month it should be =AVERAGE(C2:C10) and the result is 61%.
Assuming there is a Calendar table which has Calendar[Year] and Calendar[YearMonth] columns, here is a possible solution.
YTD A =
-- Calculated table which has start and end dates of each YTD months
VAR YearMonthsYTD =
ADDCOLUMNS(
SUMMARIZE(
FILTER(
ALL( 'Calendar' ),
'Calendar'[Year] = MAX( 'Calendar'[Year] )
&& 'Calendar'[Date] <= MAX( 'Calendar'[Date] )
),
'Calendar'[YearMonth]
),
"#StartDate", CALCULATE( MIN( 'Calendar'[Date] ) ),
"#EndDate", CALCULATE( MAX( [Date] ) )
)
-- Calculate the value for each month, and get the average of them
RETURN
AVERAGEX(
YearMonthsYTD,
CALCULATE(
[A],
FILTER(
ALL( 'Calendar' ),
'Calendar'[Date] >= [#StartDate] && 'Calendar'[Date] <= [#EndDate]
)
)
)
BTW, although I'm not an accounting expert, I doubt whether your calculation logic is correct, because an average of percentages is generally not an appropriate measure.
In the presented scenario, wouldn't this definition be more appropriate?
YTD A = DIVIDE( <YTD value of numerator>, <YTD value of denominator> )
This would not only make it correct, but also much easier:
YTD A = CALCULATE( [A], DATESYTD( 'Calendar'[Date] ) )
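For reference, a sketch of the explicit DIVIDE form, assuming [Current Month W/Allowance Over 90 $] and [Aging] (the numerator and denominator behind [A]) are simple additive measures:
YTD A explicit =
DIVIDE(
    CALCULATE( [Current Month W/Allowance Over 90 $], DATESYTD( 'Calendar'[Date] ) ),
    CALCULATE( [Aging], DATESYTD( 'Calendar'[Date] ) )
)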
Forgive me if this is not a relevant comment for the OP's specific case.
I am trying to calculate the gradient of the trendline passing through a series of points contained within my dataset. I have researched whether there are built-in functions to do this and there don't seem to be, so I am doing it manually. I'm not a DAX expert (nor probably a maths expert either!).
I have created a table in Excel to walk through a simple example so I know what I'm aiming for:
In the Power BI environment, there are two tables joined on the "Month&Year" columns. An abridged illustration of these tables is below:
Please note the "Orders" measure from the illustration is referred to as "Special orders per day" in the Power BI code.
Step 1
Create the measure that averages the month numbers:
Average of months =
- AVERAGEX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"AvMonths", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
[AvMonths]
)
I use AVERAGE in the expression part so that the record for Sept-2018 has 21 in the "AvMonths" column and Oct-2018 has 22. I guess I could have used MIN or MAX, because they will all say 21 or 22 depending on the month (the only one to avoid would be SUM, as this would add them all up). Note the leading minus sign on the measure: the average is stored negated, which is why the later steps add it rather than subtract it.
I also tried to do this by summarizing and then creating a NATURALLEFTOUTERJOIN to the User_Friendly_Months table to get the month number for these months, but when I incorporated that into the rest of this procedure the measure took forever to calculate (even though it actually worked in the end somehow).
Step 2
Do the same for orders:
Average of special orders =
- AVERAGEX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day]
),
[Special OPD]
)
Step 3
Perform the calculation that goes through to step "C" in my original picture:
Column_C_Step =
SUMX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day],
"MonthNum", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
( [Special OPD] + [Average special orders] )
* ( [MonthNum] + [Average of MonthNums] )
)
Instead of returning -11.95 in my example, the measure returns zero.
When I do this:
Check_orders_worked =
SUMX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day],
"MonthNum", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
[Special OPD]
)
...I get 1188.9, which is the total of "Orders" in my Excel table illustration (so must be working).
When I do this:
Check_months_worked =
SUMX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day],
"MonthNum", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
[MonthNum]
)
...I get 43, which is the total of Month_Num in my illustration (so again, must be working).
But when I attempt to perform the equivalent of a SUMPRODUCT on A and B to get C, it returns zero.
Can anyone shed any light on what on earth is going on?
It is driving me insane.
Or if there is a simpler way to do a gradient calculation I will cry with joy.
Thank you
UPDATE
For completeness, here is the measure that worked. Capturing the averages in variables first is what fixes it: when the average measures are referenced directly inside SUMX, context transition re-evaluates them in each month's filter context, so every row's term collapses to zero.
Step_C_Measure =
VAR _OrdersAverage = [Average special orders]
VAR _MonthsAverage = [Average of MonthNums]
RETURN
SUMX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day],
"MonthNum", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
( [Special OPD] + _OrdersAverage )
* ( [MonthNum] + _MonthsAverage )
)
Then Step D:
Step_D_Measure =
VAR _MonthsAverage = [Average of MonthNums]
RETURN
SUMX (
SUMMARIZE (
CALCULATETABLE ( Query_GSR, ALLSELECTED ( User_Friendly_Months ) ),
Query_GSR[Month&Year],
"Special OPD", [Special orders per day],
"MonthNum", AVERAGE ( Query_GSR[MonthNumForSlope] )
),
( [MonthNum] + _MonthsAverage )
* ( [MonthNum] + _MonthsAverage )
)
And finally to get the gradient:
Special order gradient =
DIVIDE ( [Step_C_Measure], [Step_D_Measure], "" )
In a question about multiple linear regression, I linked to a community post that covers basic linear regression.
In your case, the slope can be calculated with something similar to this:
Slope =
-- Standard least-squares slope: (n * Sum_XY - Sum_X * Sum_Y) / (n * Sum_XX - Sum_X^2)
VAR RowCount = COUNTROWS(Query_GSR)
VAR Sum_X = SUMX(Query_GSR, Query_GSR[Month_Num])
VAR Sum_Y = SUMX(Query_GSR, Query_GSR[Orders])
VAR Sum_XY = SUMX(Query_GSR, Query_GSR[Month_Num] * Query_GSR[Orders])
VAR Sum_XX = SUMX(Query_GSR, Query_GSR[Month_Num] * Query_GSR[Month_Num])
RETURN DIVIDE(RowCount * Sum_XY - Sum_X * Sum_Y, RowCount * Sum_XX - Sum_X * Sum_X)
This works for a regression on multiple months, not just two.
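If the intercept is also wanted, so that the full trendline can be drawn, the same sums can be reused. A sketch along the same lines (not part of the original answer), assuming a [Slope] measure defined as above:
Intercept =
VAR RowCount = COUNTROWS(Query_GSR)
VAR Sum_X = SUMX(Query_GSR, Query_GSR[Month_Num])
VAR Sum_Y = SUMX(Query_GSR, Query_GSR[Orders])
RETURN DIVIDE(Sum_Y - [Slope] * Sum_X, RowCount) -- intercept = (Sum_Y - slope * Sum_X) / n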