I'm kind of new to Power BI, and I haven't been able to find an answer to my problem.
I have this kind of dataset:
Timestamp ErrorType Duration (ms)
16/05/10 8:00 3 100
16/05/10 8:00 4 1000
17/05/10 10:00 3 100
18/05/10 8:00 3 200
18/05/10 10:00 4 200
18/05/10 10:00 5 50
19/05/10 10:00 5 10
19/05/10 10:00 5 10
The names are hopefully pretty self-explanatory: Timestamp is the time at which the issue occurred, ErrorType is a code identifying what kind of error it is, and Duration indicates how long the issue lasted. What I'd like to do is essentially make a measure that gives me the rank of each error type, taking into account any filters I could use on the page.
For example, if I restrict myself to the time period 17/05 to 19/05, the measure for error type 3 should give me 1 and the measure for 4 should give me 2, whereas it would be the opposite over the whole time range. In all cases, the measure for 5 should give me 3. The first case is illustrated in the table below:
ErrorType Rank
3 1
4 2
5 3
The idea behind this is to be able to color-code a graph by importance within the specified time frame: the error type with the longest cumulative duration in that time frame would be colored red, the second orange, and so on for the first eight. That part I know how to do.
I've already tried something, but it just won't work, for some reason it says that it can't find the column called "Total" in the Table I've just created...
Color =
VAR TotalTime =
    CALCULATE (
        SUM ( 'AuFilDeLEau'[Duration] ),
        ALLSELECTED ( 'AuFilDeLEau'[ErrorCode] )
    )
VAR ColumnTotalTime =
    SUMMARIZE ( Table, Table[Duration], "Total", SUM ( Table[Duration] ) )
VAR RankforColor =
    RANKX ( ColumnTotalTime, [Total], TotalTime )
RETURN
    RankforColor
Currently, this only gives me back 1 for each and every ErrorType.
I hope my issue is clear, and you'll be able to help me, thanks in advance ^^
EDIT: tried smpa01's solution, didn't work, it did this:
EDIT: tried smpa01's second solution, it worked. Marked as solved. Thanks !
_rank =
VAR _1 =
MAX ( 'Table'[ErrorType] )
VAR _2 =
RANKX (
FILTER ( ALLSELECTED ( 'Table' ), 'Table'[ErrorType] = _1 ),
CALCULATE ( MAX ( 'Table'[Duration] ) ),
,
ASC,
DENSE
)
RETURN
_2
EDIT: adding this portion after the revised data:
_revisedRank =
IF (
HASONEVALUE ( 'Table'[ErrorType] ) = TRUE (),
RANKX (
ALL ( 'Table'[ErrorType] ),
CALCULATE ( SUM ( 'Table'[Duration (ms)] ) ),
,
DESC
)
)
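For anyone attempting the coloring part as well: a minimal sketch (assuming the _revisedRank measure above; the measure name ErrorColor and the color strings are made up for illustration) that maps the rank to a conditional-formatting color could look like:
ErrorColor =
SWITCH (
    [_revisedRank],
    1, "Red",    -- longest cumulative duration in the current filter context
    2, "Orange",
    3, "Yellow",
    "Gray"       -- fallback for anything beyond the ranks you care about
)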
Related
I'm trying to see the total number of team members who registered their hours spent on their projects in the previous month (June in this case). The Engmt table is:
The measure I wrote below filters for June, however it doesn't distinctly count the names (it includes team members 1 and 4 twice). The output I get is 6, but it is supposed to be 5.
currentMonth =
SUMX (
    FILTER (
        'Engmt',
        DATE ( YEAR ( 'Engmt'[Month] ), MONTH ( 'Engmt'[Month] ), 1 )
            = DATE ( YEAR ( TODAY () ), MONTH ( TODAY () ) - 1, 1 )
    ),
    CALCULATE ( DISTINCTCOUNT ( 'Engmt'[Name] ) )
)
CurrentMonth = distinctcount(Engmt[Name])
Use a Relative Date filter...
Sample File
Don't panic :). Let's understand what's going on: you get 6, and that is the number of unique names in your whole table. So your filter doesn't work.
currentMonth =
SUMX(
FILTER(
'Engmt'
,DATE(
YEAR('Engmt'[Month]) -- how can you get a year from this?
-- it's hard to say, but it's not possible, sorry.
,MONTH('Engmt'[Month])
,1
) = DATE(
YEAR (TODAY())
,MONTH(TODAY())-1
,1
)
)
,CALCULATE(DISTINCTCOUNT('Engmt'[Name]))
)
But you are close to getting the right result. OK, you only have month names in your table, so let's use them.
currentMonth =
CALCULATE(
DISTINCTCOUNT('Engmt'[Name])
,'Engmt'[Month]=FORMAT(
DATE(
YEAR (TODAY())
,MONTH(TODAY())-1
,1
)
,"MMMM"
)
)
It may not work for various reasons, for example a space after the month name in the table or something else, but most probably it will. Anyhow, it's less tricky to use a slicer: it will filter the table in the proper way, and then DISTINCTCOUNT('Engmt'[Name]) is enough.
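If a trailing space really is the culprit, a hedged variant (same table and column names as above; just TRIM added on top of the previous idea, moved into an explicit FILTER) could be:
currentMonth =
CALCULATE (
    DISTINCTCOUNT ( 'Engmt'[Name] ),
    FILTER (
        ALL ( 'Engmt'[Month] ),
        -- trim the stored month name before comparing it with the formatted previous month
        TRIM ( 'Engmt'[Month] )
            = FORMAT ( DATE ( YEAR ( TODAY () ), MONTH ( TODAY () ) - 1, 1 ), "MMMM" )
    )
)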
I have a quite peculiar problem. I have a column with values that represent the state of inventory for each site (category).
This means that the relevant value for each month is always the one on the last day with data, per site, per month.
Example
For site 667 for December it is 5 252 235.74 (31/12/2021), but for site 200 it is 79 967 894.18 (30/12/2021).
The sum of those values should be 85 220 129.92, which is the state of inventory for those two sites for December.
I was able to calculate this with this measure
Inventory Cost =
VAR _pretable =
ADDCOLUMNS (
SUMMARIZE (
v_factinventorytransactions,
v_dimdate[DateId],
v_factinventorytransactions[SiteId]
),
"InventoryCost", CALCULATE ( AVERAGE ( v_factinventorytransactions[RunningCost] ) )
)
VAR _table =
FILTER (
_pretable,
VAR _MaxDate =
CALCULATE (
MAX ( v_factinventorytransactions[InventoryTransactionDateId] ),
ALLSELECTED ( v_dimdate[DateId] )
)
RETURN
v_dimdate[DateId] = _MaxDate
)
RETURN
SUMX ( _table, [InventoryCost] )
This works perfectly, but I'm wondering if it can be simplified. I want to simplify it because when I use this measure inside another one, which sums those Inventory Cost values per month for the last 3 months, I get wrong answers.
In other words, this Inventory Cost measure works on its own, but if I call it from the measure below it shows wrong numbers (other, simpler measures work fine).
Rolling3Months =
VAR _EndDate = MAX(v_dimdate[Date])
VAR _Dates = DATESINPERIOD(v_dimdate[Date], _EndDate, -3, MONTH)
VAR _Cost = [Inventory Cost]
VAR _Inventory = SUMX(_Dates, CALCULATE(_Cost, ALL(v_dimdate[YearMonth])))
RETURN
_Inventory
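A side note on one likely cause (a guess, sketched under the assumption that the rest of the model is as described): because [Inventory Cost] is captured in the variable _Cost before the SUMX, the inner CALCULATE cannot change its value, since variables are evaluated only once. Referencing the measure directly inside the iteration at least lets the context transition happen per date; whether iterating by date is the right grain here is a separate question:
Rolling3Months =
VAR _EndDate = MAX ( v_dimdate[Date] )
VAR _Dates = DATESINPERIOD ( v_dimdate[Date], _EndDate, -3, MONTH )
RETURN
    -- the measure reference (not a variable) is re-evaluated for every iterated date
    SUMX ( _Dates, CALCULATE ( [Inventory Cost], ALL ( v_dimdate[YearMonth] ) ) )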
I'm a little bit stuck and would really appreciate it if someone could point out my mistakes/errors here.
I'm also providing a sample Power BI file:
https://wetransfer.com/downloads/cabd5902e1491b6874064a0deb26d0ae20220614185839/dac3e16acdce4c2b707e92bd56a04d1020220614185904/77a258
Thank you
I have a big data set with the structure as shown below.
Operation User Timestamp              Elapsed time
12        1    2018-01-03 11:19:02 AM
12        1    2018-01-03 12:34:02 PM
12        1    2018-01-04 8:34:02 AM
12        2    2018-02-03 9:34:02 AM
12        2    2018-02-03 11:12:42 AM
12        3    2018-02-03 12:12:00 PM
15        1    2018-01-02 9:22:32 AM
15        1    2018-01-02 9:25:32 AM
15        2    2018-01-02 9:25:32 AM
The goal is to form the "Elapsed time" column using DAX and Power BI. The column shows the difference/duration between the current timestamp and the previous timestamp for the same user and the same operation.
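For instance, for the second row of operation 12 / user 1 in the sample above, the expected value is the gap from 11:19:02 AM to 12:34:02 PM on 2018-01-03, i.e. 75 minutes; the first row of each (Operation, User) pair has no previous timestamp to compare against.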
I've tried something along the lines of:
Elapsed time =
DATEDIFF (
CALCULATE (
MAX ( data[Timestamp] ),
ALLEXCEPT ( data, data[Operation], data[User] ),
data[Timestamp] < EARLIER ( data[Timestamp] )
),
data[Timestamp],
MINUTE
)
But it complains: "A single value for column 'Timestamp' in table 'data' cannot be determined. This can happen when a measure formula refers to a column that contains many values without specifying an aggregation such as min, max, count, or sum to get a single result."
I'm very new to DAX, so I'd appreciate any help.
Since the 'Table'[Operation] and 'Table'[User] of the current row are to be used as filters, a very simple approach might just use CALCULATE to trigger the context transition, transforming the current row context into the corresponding filter context, and then replace the filter over 'Table'[Timestamp] so that it is less than the current Timestamp, previously saved to a variable. The context transition automatically sets the correct filters over 'Table'[Operation] and 'Table'[User].
Elapsed time =
VAR CurrentTimestamp = 'Table'[TimeStamp]
RETURN
DATEDIFF (
CALCULATE ( MAX ( 'Table'[Timestamp] ), 'Table'[Timestamp] < CurrentTimestamp ),
CurrentTimestamp,
MINUTE
)
Typing on the mobile, so apologies for possible errors. Assuming this is a calculated column:
Elapsed time =
DATEDIFF (
CALCULATE (
MAX ( Table[Timestamp] ),
FILTER (
Table,
Table[User] = EARLIER ( Table[User] )
&& Table[Operation] = EARLIER ( Table[Operation] )
&& Table[Timestamp] < EARLIER ( Table[Timestamp] )
)
),
Table[Timestamp],
MINUTE
)
Where Table is your table name.
There surely are other ways to do that, so apologies for the non-optimal approach.
New to Power BI, so forgive me for the description here. I'm working with a dataset of retail headcount sensors, which gives me a table of locations, timestamps, and a count of shoppers:
Room TimeStamp Count_In
123 3/13/2019 8
456 4/4/2019 9
123 3/28/2019 11
123 3/18/2019 11
456 3/22/2019 3
etc...
I'm trying to calculate a running total for each "room" over time. The overall running total column is easy:
C_In =
CALCULATE (
SUM ( Sheet1[In] ),
ALL ( Sheet1 ),
Sheet1[Time] <= EARLIER ( Sheet1[Time] )
)
But I'm unable to figure out how to add that second filter, making sure that I'm only summing for each distinct location. Help is appreciated!
Your ALL function removes every filter on Sheet1; try using ALLEXCEPT instead, to keep the filter on the Room column.
C_In =
CALCULATE (
SUM ( Sheet1[In] ),
ALLEXCEPT ( Sheet1, Sheet1[Room] ),
Sheet1[Time] <= EARLIER ( Sheet1[Time] )
)
I need some help creating a measure in PowerPivot. I have Googled and tried all the options I could find, without success. I have a fact table with sales leads. Some of the leads resulted in a sale, some did not. I need to measure the value of the leads: I sum the values and divide by the number of records in the table.
Average Total of Leads:=calculate(Table1[Sum of Value]/Table1[Count of Lead name])
My problem is to create the measure which give me 3 months rolling average.
I have tried:
Roll3Average:=[Average Value of leads]/CALCULATE(DISTINCTCOUNT(dimdate[MonthName]),
DATESINPERIOD(dimdate[Dates],
LASTDATE(dimdate[Dates]),-3,Month
)
)
I have tried:
Rolling3Average:=IF(COUNTROWS(VALUES(dimdate[MonthName])) = 1,
CALCULATE(
[Average of Value]/ COUNTROWS(VALUES(dimdate[MonthName] ) ) ,
DATESBETWEEN(
dimdate[Dates],
FIRSTDATE(PARALLELPERIOD(dimdate[dates], -2, MONTH)),
LASTDATE(PARALLELPERIOD(dimdate[dates], 0, MONTH))
), ALL(DimDate)
)
)
I have tried:
Total Sales rolling :=
CALCULATE (
    AVERAGEX ( Table1, [Average Total of deals] ),
    FILTER (
        ALL ( dimdate ),
        dimdate[Month] >= MAX ( dimdate[Month] ) - 2
            && dimdate[Month] <= MAX ( dimdate[Month] )
    )
)
I cannot get it right.
I hope someone can see where I go wrong.
#Marcus
Click here and see my data model. Thanks
I still have trouble with my data model.
I have linked a very simplified example. I hope someone can help me.
Thank you
Note: in the example I am using [Sales] instead of leads.
The main structural change you will want to make is to create a month_index column in your dimdate table. The advantage of having that field is that it makes calculating the total over 3 months easier, since it removes the need to handle crossing year boundaries like in your 2nd example. The other advantage is that a month_index handles non-standard calendars, e.g. 4-4-5.
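For reference, a minimal sketch of such an index (assuming a dimdate[Dates] date column as in the question; the column name Month_index is just the one used below) could be a calculated column like:
Month_index =
-- increases by exactly 1 per calendar month, so December to January is still a step of 1
YEAR ( dimdate[Dates] ) * 12 + MONTH ( dimdate[Dates] )
Any monotonically increasing month counter works; only the differences between values matter for the filters below.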
To start with:
Sales:=SUM(Data[Qty])
The next part is to calculate the sales (or leads) over the 3 months. Below, we use the month_index field to quickly define the date range over which we want to sum.
3_Month_Sales :=
CALCULATE (
[Sales],
FILTER (
ALL ( dimdate ),
dimdate[Month_index] <= MAX ( dimdate[Month_index] )
&& dimdate[Month_index]
>= MAX ( dimdate[Month_index] ) - 2
)
)
The next part depends on the ask, since the average could be calculated in two ways. The main question is whether it is a flat 3-month average, or whether the denominator should be the number of months that have a value greater than 0.
The simple way:
3_Month_Average:=DIVIDE( [3_Month_Sales], 3)
The more complex way, in which I learned you can wrap SUMX in a CALCULATE. The idea is that the CALCULATE looks at the period for your 3 months, and then the SUMX iterates down by Year and then Month. At the end it checks whether sales is greater than 0; if it is, a 1 is assigned. Then those 1's are summed.
Count_of_Periods :=
CALCULATE (
SUMX (
VALUES ( dimdate[Year] ),
SUMX ( VALUES ( dimdate[Month] ), IF ( [Sales] > 0, 1 ) )
),
FILTER (
ALL ( dimdate ),
dimdate[Month_index] <= MAX ( dimdate[Month_index] )
&& dimdate[Month_index]
>= MAX ( dimdate[Month_index] ) - 2
)
)
And then finally
3_Month_Alternative:=DIVIDE([3_Month_Sales], [Count_of_Periods])
Below would be an image using some random sample data, showing how the different fields interact. As part of the example, the April 2017 data was removed to show how the Count_of_Periods calculation handles the fact that there was no data in that period.