So I have a variable
var varSubItem = CALCULATE (MAX(Outages[SubItem]), Outages[DATE] >= DATE(2019, 07, 14) )
to pick out the items that have had an outage within the last day. See below.
Then I have another variable
var data =
CALCULATE (
COUNT ( Outages[CASE_ID] ),
ALLSELECTED ( Outages ),
Outages[SubItem] = varSubItem
)
which gives me back the outage count for the devices in the last 2 years. It's only the last two years because my table visual has a filter for that time frame.
I pray that I'm making sense because I have been trying to do this for 2 weeks now.
Devices w Outages 2Yr =
VAR devices =
CALCULATE ( MAX ( Outages[DEVICE_ID] ), Outages[DATE] >= DATE ( 2019, 07, 14 ) )
VAR data =
CALCULATE (
COUNT ( Outages[CASE_ID] ),
ALLSELECTED ( Outages ),
Outages[DEVICE_ID] = devices
)
RETURN data
I'm getting this,
| Area | Item | SubItem | Case | Date | Outage Count |
|--------|------|---------|-----------|-----------------|--------------|
| XXXXX' | ABC1 | 123A | 123456789 | 7/14/19 1:15 AM | 1 |
| | ABC2 | 123B | 132456798 | 7/14/19 3:20 AM | 1 |
| | ABC3 | 123C | 984561325 | 7/14/19 6:09 PM | 1 |
| | ABC4 | 123D | 789613453 | 7/14/19 3:54 PM | 3 |
| | ABC5 | 123E | 335978456 | 7/14/19 2:10 PM | 2 |
| Total | | | | | 8 |
When I should be getting this,
| Area | Item | SubItem | Case | Date | Outage Count |
|--------|------|---------|-----------|-----------------|--------------|
| XXXXX' | ABC1 | 123A | 123456789 | 7/14/19 1:15 AM | 1 |
| | ABC2 | 123B | 132456798 | 7/14/19 3:20 AM | 1 |
| | ABC3 | 123C | 984561325 | 7/14/19 6:09 PM | 1 |
| | ABC4 | 123D | 789613453 | 7/14/19 3:54 PM | 1 |
| | ABC4 | 123D | 789613211 | 4/19/18 4:20 AM | 1 |
| | ABC4 | 123D | 789611121 | 9/24/17 5:51 AM | 1 |
| | ABC5 | 123E | 335978456 | 7/14/19 2:10 PM | 1 |
| | ABC5 | 123E | 335978111 | 2/21/19 7:19 AM | 1 |
| Total | | | | | 8 |
I think what you want is closer to this:
Devices w Outages 2Yr =
VAR devices =
CALCULATETABLE (
VALUES ( Outages[SubItem] ),
ALLSELECTED ( Outages ),
Outages[DATE] >= TODAY() - 1
)
RETURN
CALCULATE (
COUNT ( Outages[Case] ),
FILTER ( Outages, Outages[SubItem] IN devices )
)
This creates a list of SubItem values rather than the single value you get with MAX, and it's also where your ALLSELECTED function needs to go.
Edit: To total at the SubItem level try this tweak:
Devices w Outages 2Yr =
VAR devices =
CALCULATETABLE (
VALUES ( Outages[SubItem] ),
ALLSELECTED ( Outages ),
Outages[DATE] >= TODAY() - 1,
VALUES ( Outages[SubItem] )
)
RETURN
CALCULATE (
COUNT ( Outages[Case] ),
ALLSELECTED ( Outages ),
Outages[SubItem] IN devices
)
The exact logic here is a bit complex for a beginner DAX user, but just keep in mind that DAX is all about filters.
For the variable devices, we want a list of all the SubItem values in the current context, subject to a date constraint. The CALCULATETABLE function lets us modify the filter context. The ALLSELECTED function is a table filter that removes the filter context coming from the visual, so that all Date and Case values not filtered out by slicers or page/report-level filters are included; without it, you'd get blanks for rows whose dates fall before TODAY() - 1. The boolean date filter is self-explanatory, and then I add one more table filter at the end, VALUES ( Outages[SubItem] ), to add back the SubItem context from the visual.
The CALCULATE piece works the same way: we count the Case values after altering the filter context to remove the filters on Case and Date, taking only the SubItem values from the list generated in the variable.
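If it helps, here is that Edit version again with each argument annotated; the comments just restate the two paragraphs above, the logic is unchanged:
Devices w Outages 2Yr =
VAR devices =
    CALCULATETABLE (
        VALUES ( Outages[SubItem] ),    -- the list of SubItems we want back
        ALLSELECTED ( Outages ),        -- drop the visual's row filters (Date, Case, ...)
        Outages[DATE] >= TODAY () - 1,  -- keep only SubItems with an outage in the last day
        VALUES ( Outages[SubItem] )     -- add back the SubItem context from the visual row
    )
RETURN
    CALCULATE (
        COUNT ( Outages[Case] ),        -- count the cases
        ALLSELECTED ( Outages ),        -- across everything the slicers allow
        Outages[SubItem] IN devices     -- but only for the SubItems collected above
    )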
Related
I have a measure which displays the number of employees in relation to the date.
Each day FactEmployee is updated to reflect who is working. This means that my measure (obviously) can't display how many employees there are tomorrow.
I would like to persist the latest value (i.e. today's value) into the future.
Data model
My (not perfect) measure
Count, employee :=
VAR today = TODAY()
VAR res =
IF (
MAX ( DimDate[fulldate] ) > today,
CALCULATE (
COUNT ( DimEmployee[emp_key] ),
FILTER ( ALL ( FactEmployee ), RELATED ( DimDate[fulldate] ) = today)
),
CALCULATE ( COUNT ( DimEmployee[emp_key] ), FactEmployee )
)
RETURN
res
Output
year-month count, emp
---------------------------
2020-01 182
2020-02 180
2020-03 174
2020-04 171
2020-05 171
2020-06 173
2020-07 172
2020-08 175
2020-09 172
Expected Output
year-month count, emp
--------------------------
2020-01 182
2020-02 180
2020-03 174
2020-04 171
2020-05 171
2020-06 173
2020-07 172
2020-08 175
2020-09 172
2020-10 172 <----repeated value from 2020-09
2020-11 172 <----repeated value from 2020-09
2020-12 172 <----repeated value from 2020-09
How can I fix my measure to get the missing values (October to December)?
I have replicated your model using a simplified version; I don't think you need DimEmployee in this case.
Assuming your model is like this:
And your tables look like these:
FactEmployee
+----------+---------+
| date_key | emp_key |
+----------+---------+
| 20200101 | 1       |
| 20200102 | 1       |
| 20200103 | 1       |
| 20200104 | 1       |
| 20200105 | 1       |
| 20200101 | 2       |
| 20200102 | 2       |
| 20200104 | 2       |
| 20200101 | 3       |
| 20200102 | 3       |
| 20200103 | 3       |
| 20200104 | 3       |
| 20200105 | 4       |
+----------+---------+
DimDate
+------------+----------+
| Date       | Date_key |
+------------+----------+
| 01/01/2020 | 20200101 |
| 02/01/2020 | 20200102 |
| 03/01/2020 | 20200103 |
| 04/01/2020 | 20200104 |
| 05/01/2020 | 20200105 |
| 06/01/2020 | 20200106 |
| 07/01/2020 | 20200107 |
+------------+----------+
I have created a calculation that follows these steps:
Compute the maximum date_key that has a valid (non-blank) distinct count of emp_key, stored in the variable MaxDateKey.
An IF statement is evaluated for date_key values greater than MaxDateKey - in this case 20200106 and 20200107. For those dates, the calculation retrieves the distinct count of emp_key as of MaxDateKey.
When the IF statement is false, the distinct count is calculated as usual.
Count =
VAR MaxDateKey =
CALCULATE (
LASTNONBLANK ( FactEmployee[date_key], DISTINCTCOUNT ( FactEmployee[emp_key] ) ),
REMOVEFILTERS ( DimDate[Date] )
)
VAR Result =
IF (
MAX ( DimDate[Date_key] ) > MaxDateKey,
CALCULATE (
DISTINCTCOUNT ( FactEmployee[emp_key] ),
ALL ( DimDate[Date] ),
DimDate[Date_key] = MaxDateKey
),
DISTINCTCOUNT ( FactEmployee[emp_key] )
)
RETURN
Result
The output is below. The value from the last valid date, the 5th of Jan, is applied to the subsequent dates (6th and 7th of Jan).
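Reconstructed from the sample tables above:
+------------+-------+
| Date       | Count |
+------------+-------+
| 01/01/2020 | 3     |
| 02/01/2020 | 3     |
| 03/01/2020 | 2     |
| 04/01/2020 | 3     |
| 05/01/2020 | 2     |
| 06/01/2020 | 2     | <- repeated from 05/01/2020
| 07/01/2020 | 2     | <- repeated from 05/01/2020
+------------+-------+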
For a line chart, you can also check the Forecast option in the Analytics pane, as shown below.
The output will look something like this:
How to calculate median of category sums? I have sample data:
+----------------+-----------+
| category       | sales     |
+----------------+-----------+
| a              | 1         |
| a              | 2         |
| a              | 4         |
| b              | 1         |
| b              | 3         |
| b              | 4         |
| c              | 1         |
| c              | 4         |
| c              | 5         |
+----------------+-----------+
+----------------+-----------+
| category       | sales_sum |
+----------------+-----------+
| a              | 7         |
| b              | 8         | <- This is the median
| c              | 10        |
+----------------+-----------+
| median of sums | 8         | <- This is the expected result, regardless of row context
+----------------+-----------+
I have had little success with this measure. It returns the correct result, but only for the category total. I want to get 8 for each category.
Median_of_sums :=
MEDIANX (
VALUES ( T[Category] ),
SUM ( T[Sales] )
)
I am not entirely sure what you are looking for, but perhaps using the SUMMARIZE function would do the trick here:
Total =
MEDIANX (
SUMMARIZE (
T,
T[category],
"Sales_Calc", SUM ( T[sales] )
),
[Sales_Calc]
)
The idea is to first summarize the information at the category level and then calculate the median over the summarized table. This gives the following results for the attached sample:
a 7
b 8
c 10
Total 8
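As a quick sanity check against the sample data: the category sums are 7, 8 and 10, and the median of {7, 8, 10} is 8, which is the expected grand total.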
If you want 8 to be reflected for all categories, you would have to use the ALL function to make sure the category context does not affect the calculation:
Total =
MEDIANX (
SUMMARIZE (
ALL ( T ),
T[category],
"Sales_Calc", SUM ( T[sales] )
),
[Sales_Calc]
)
Hope this helps.
How do I construct a DAX measure which returns the sum of either A or B? The logic is: take B if A is empty. The expected results look like this:
+---+---+----------+
| A | B | Expected |
+---+---+----------+
| 1 |   | 1        |
| 1 |   | 1        |
|   | 2 | 2        |
| 1 | 2 | 1        |
|   | 2 | 2        |
+---+---+----------+
| 3 | 6 | 7        |
+---+---+----------+
When I use the measure:
Measure = IF ( ISBLANK ( SUM ( tab[A] ) ), SUM ( tab[B] ), SUM ( tab[A] ) )
I get 3 for the total, which is logical but not what I expect.
I'd recommend using a SUMX iterator in this case; because it evaluates the IF row by row, the total becomes the sum of the per-row results instead of a single IF over the column totals.
Measure = SUMX ( tab, IF ( ISBLANK ( tab[A] ), tab[B], tab[A] ) )
You might be able to do the following as well:
Measure =
CALCULATE ( SUM ( tab[A] ) ) +
CALCULATE ( SUM ( tab[B] ),
FILTER ( tab, ISBLANK( tab[A] ) )
)
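If your version of Power BI has the COALESCE function (added to DAX in 2020), an equivalent sketch is a one-line iterator; COALESCE returns its first non-blank argument, so each row contributes A when present and B otherwise:
Measure = SUMX ( tab, COALESCE ( tab[A], tab[B] ) )
Against the sample rows this gives 1 + 1 + 2 + 1 + 2 = 7, matching the expected total.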
Consider the following tables - one of printers, the other of page counts from meter readings:
Printers
+------------+---------+--------+
| Printer ID | Make    | Model  |
+------------+---------+--------+
| 1          | Xerox   | ABC123 |
| 2          | Brother | DEF456 |
| 3          | Xerox   | ABC123 |
+------------+---------+--------+
Meter Read
+-------+------------+-----------+------------+
| Index | Printer ID | Poll Date | Mono Pages |
+-------+------------+-----------+------------+
| 1     | 1          | 1/1/2019  | 1000       |
| 2     | 2          | 1/1/2019  | 800        |
| 3     | 3          | 1/1/2019  | 33000      |
| 4     | 1          | 1/2/2019  | 1100       |
| 5     | 2          | 1/2/2019  | 850        |
| 6     | 3          | 1/2/2019  | 34000      |
| 7     | 1          | 1/3/2019  | 1200       |
| 8     | 2          | 1/3/2019  | 900        |
| 9     | 3          | 1/3/2019  | 35000      |
| 10    | 1          | 1/4/2019  | 1400       |
| 11    | 2          | 1/4/2019  | 950        |
| 12    | 3          | 1/4/2019  | 36000      |
| 13    | 1          | 1/5/2019  | 1800       |
| 14    | 2          | 1/5/2019  | 1000       |
| 15    | 3          | 1/5/2019  | 36500      |
| 16    | 1          | 1/6/2019  | 2000       |
| 17    | 2          | 1/6/2019  | 1050       |
| 18    | 3          | 1/6/2019  | 37500      |
| 19    | 1          | 1/7/2019  | 2100       |
| 20    | 2          | 1/7/2019  | 1100       |
| 21    | 3          | 1/7/2019  | 39000      |
| 22    | 1          | 1/8/2019  | 2200       |
| 23    | 2          | 1/8/2019  | 1150       |
| 24    | 3          | 1/8/2019  | 40000      |
+-------+------------+-----------+------------+
In my Power BI report, I have a Dates table:
Dates = CALENDAR(DATE(2019, 1, 1), DATE(2019, 1, 31))
that I am using as a slicer. The goal is to end up with a delta of Mono Pages during the date range from the slicer. I'm able to grab the difference between each meter read with a fairly complicated calculated column on the Meter Read table:
PagesSinceLastPoll =
IF(
ISBLANK(
LOOKUPVALUE(
'Meter Read'[Mono Pages],
'Meter Read'[Index], CALCULATE(
MAX(
'Meter Read'[Index]
), FILTER(
'Meter Read',
'Meter Read'[Index] < EARLIER('Meter Read'[Index])
&& 'Meter Read'[Printer ID] = EARLIER('Meter Read'[Printer ID] )
)
)
)
),
BLANK(),
'Meter Read'[Mono Pages] -
LOOKUPVALUE(
'Meter Read'[Mono Pages],
'Meter Read'[Index], CALCULATE(
MAX(
'Meter Read'[Index]
), FILTER(
'Meter Read',
'Meter Read'[Index] < EARLIER('Meter Read'[Index])
&& 'Meter Read'[Printer ID] = EARLIER('Meter Read'[Printer ID] )
)
)
)
)
But the performance over 10,000+ rows is pretty bad. I'd like to grab the max and min values for a device in the filtered date range and just subtract instead, but I'm having a hard time getting the right value. My DAX so far keeps getting me the max value from the ENTIRE table, not the table filtered on the dates in my slicer. Everything I've tried so far is some variation on:
MaxInRange =
CALCULATE (
MAX ( 'Meter Read'[Mono Pages] ),
FILTER ( 'Meter Read', 'Meter Read'[Printer ID] = Printers[Printer ID] )
)
To summarize: If I have a slicer starting 1/2/2019 and ending 1/5/2019, the max value for Printer ID 1 should read 1800, not 2200.
Thoughts?
The calculated column can be done more efficiently like this:
PagesSinceLastPoll =
VAR PrevRow =
    TOPN ( 1,
        FILTER ( 'Meter Read',
            'Meter Read'[Printer ID] = EARLIER ( 'Meter Read'[Printer ID] ) &&
            'Meter Read'[Poll Date] < EARLIER ( 'Meter Read'[Poll Date] )
        ),
        'Meter Read'[Poll Date]
    )
RETURN 'Meter Read'[Mono Pages] - SELECTCOLUMNS ( PrevRow, "Pages", 'Meter Read'[Mono Pages] )
Using that, the number of pages between two dates can be found by simply summing this column over those dates.
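For example, a measure over that column can be as simple as the sketch below; the name Pages In Range is just a placeholder, and it assumes the Dates table filters 'Meter Read' through a relationship on the poll date:
Pages In Range = SUM ( 'Meter Read'[PagesSinceLastPoll] )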
If you want to skip that and go straight to a measure, try something like this:
PagesInPeriod =
VAR StartDate = FIRSTDATE ( Dates[Date] )
VAR EndDate = LASTDATE ( Dates[Date] )
RETURN
SUMX (
    VALUES ( 'Meter Read'[Printer ID] ),
    CALCULATE (
        MAX ( 'Meter Read'[Mono Pages] ),
        Dates[Date] = EndDate
    )
    -
    CALCULATE (
        MAX ( 'Meter Read'[Mono Pages] ),
        Dates[Date] < StartDate
    )
)
Note that if you use Dates[Date] = StartDate, then you'll be off. You want to calculate the max pages before your first included date.
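To see it with the sample data: for Printer ID 1 with a slicer from 1/2/2019 to 1/5/2019, the max reading on the end date is 1800 and the max reading before the start date is 1000, so the measure returns 800, the same as summing PagesSinceLastPoll over those dates (100 + 100 + 200 + 400). If the second term used Dates[Date] = StartDate, it would subtract 1100 instead and miss the 100 pages recorded on 1/2.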
Both of these methods should give the same result:
Alexis' measure is the correct way to handle this (my thanks!), but I made a very small edit. Since it is possible that a reading was not taken on the end date, we need to look on or before that date; otherwise it treats the max on the end date as zero. The final code then becomes:
PagesInPeriod =
VAR StartDate = FIRSTDATE ( Dates[Date] )
VAR EndDate = LASTDATE ( Dates[Date] )
RETURN
SUMX (
    VALUES ( 'Meter Read'[Printer ID] ),
    CALCULATE (
        MAX ( 'Meter Read'[Mono Pages] ),
        Dates[Date] <= EndDate
    )
    -
    CALCULATE (
        MAX ( 'Meter Read'[Mono Pages] ),
        Dates[Date] < StartDate
    )
)
I have a table with the following headers:
Dates | Category | Value
1/1/00 | A | 100
1/1/00 | B | 200
1/2/00 | A | 300
1/2/00 | B | 100
What I would like to do is to be able to add a custom column with the daily rank as such:
Dates | Category | Value | Rank
1/1/00 | A | 100 | 1
1/1/00 | B | 200 | 2
1/2/00 | A | 300 | 2
1/2/00 | B | 100 | 1
My goal is to run calcs over the top for average rank, etc. How would I write the DAX code for this column?
Cheers
Try this as a calculated column:
Column =
VAR rankValue = 'table'[Value]
RETURN
CALCULATE (
RANK.EQ ( rankValue, 'table'[Value], ASC ),
ALLEXCEPT ( 'table', 'table'[Dates] )
)
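With that column in place, the calcs over the top can be plain measures. For example, keeping the generic names from the snippet above ('table' and the calculated column called Column), an average daily rank could be as simple as:
Average Rank = AVERAGE ( 'table'[Column] )
Put Category on a visual and this measure returns the average of the daily ranks for that category.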