I have a table with a number-of-visuals column:
visual_1000x123
0
1
I would like to use a dropdown list to let people select 0 or 1.
If 1 is selected, the calculated result should be xxx (let's say 55k).
If 0 is selected, the calculated result should be 0.
Maybe I should mention that there is a time filter, which works well.
The result varies according to the selected period.
This measure keeps giving me 0:
measure = IF(MAX('Simulateur'[visual_1000x123])=0, 0,
CALCULATE(SUM('Prévision Leaderboard'[Budget_impressions/1000x123_et_1000x167])) +
CALCULATE(SUM('Prévision Middleboard'[Budget_impressions/1000x123_et_1000x167])))
This measure keeps returning 55k:
measure = IF(MAX('Simulateur'[visual_1000x123])=1, 0,
CALCULATE(SUM('Prévision Leaderboard'[Budget_impressions/1000x123_et_1000x167])) +
CALCULATE(SUM('Prévision Middleboard'[Budget_impressions/1000x123_et_1000x167])))
I also tried multiplying by the number of visuals, but it didn't return anything:
measure =
(CALCULATE(SUM('Prévision Leaderboard'[Budget_impressions/1000x123_et_1000x167])) +
CALCULATE(SUM('Prévision Middleboard'[Budget_impressions/1000x123_et_1000x167])))*MAX('Simulateur'[1000x123])
If I am understanding your question correctly, you just want to multiply a number in a column by a factor (which is in a slicer, or visual).
If that is the case you can do it simply as follows:
column1 * factor result = AVERAGE('Table'[Column1]) * AVERAGE('Table (2)'[factor])
Posting this answer myself.
It's due to my data structure. All the data was in one table, and I should have separated it into different tables.
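For anyone hitting the same wall: once the 0/1 selector lives in its own table that the slicer points at, the measure can simply multiply the budget by the selection. A minimal sketch, assuming the same table and column names as above (the measure name and the use of SELECTEDVALUE are mine, not from the original model):

Budget result =
VAR SelectedVisuals =
    SELECTEDVALUE ( 'Simulateur'[visual_1000x123], 0 )    // falls back to 0 when nothing is selected
RETURN
    SelectedVisuals
        * ( SUM ( 'Prévision Leaderboard'[Budget_impressions/1000x123_et_1000x167] )
            + SUM ( 'Prévision Middleboard'[Budget_impressions/1000x123_et_1000x167] ) )

With 1 selected this returns the full budget (the 55k in the example); with 0 selected it returns 0, and the time filter keeps working because nothing else in the filter context is touched.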
I think I have just been staring at this for too long and have worked myself into a corner.
So let's say I have the following data:
I have locations that get monthly utility usage numbers. I want to create a clustered bar chart that shows how many months have data.
The "Merge_Use" column can have numbers, blanks, and N/A. Any number > 0 OR N/A is considered complete. 0 or blank is incomplete.
I want a clustered bar chart that shows % complete, split by quarter and metric type. It should show the global total % complete, but it can be filtered to show the % complete by region or individual location (the relationship from TRT_ID to region is housed in a separate table). For some reason I can't wrap my mind around the measure that would do that.
This was my first try. I used a calculated column, but it wasn't until after I got to the visual stage that I realized that my calculated column is static and won't be affected by filtering. (It sounds silly now but I made a column that assigned each completed field a % out of the total fields, i.e. 1/total # rows, thinking I could just sum these together in the visual).
How would you do this?
I may have solved my own problem.
I added a conditional column with Yes/No for "completed" based on my criteria (i.e. if "" then "No", else "Yes").
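For reference, the same rule as a DAX calculated column might look roughly like this; it assumes Merge_Use is stored as text, which is why blanks, "N/A", and numbers can coexist in it:

Completed =
IF (
    Leadership_Usage_Tracking_v2[Merge_Use] = ""
        || Leadership_Usage_Tracking_v2[Merge_Use] = "0",
    "No",     // blank or zero -> incomplete
    "Yes"     // any other value, including "N/A", counts as complete
)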
Then added the following measure:
% YES =
DIVIDE (
CALCULATE ( COUNT ( Leadership_Usage_Tracking_v2[Completed] ), Leadership_Usage_Tracking_v2[Completed] = "YES" ),
CALCULATE ( COUNT ( Leadership_Usage_Tracking_v2[Completed] ), ALLSELECTED ( Leadership_Usage_Tracking_v2[Completed] ) )
)
Seems to be working so far!
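If the helper column ever becomes a nuisance, the ratio can also be computed straight off Merge_Use. A sketch only, assuming Merge_Use is a text column (VALUE converts the numeric strings and IFERROR absorbs "N/A" and blanks):

% Complete (no helper column) =
VAR CompleteRows =
    COUNTROWS (
        FILTER (
            Leadership_Usage_Tracking_v2,
            Leadership_Usage_Tracking_v2[Merge_Use] = "N/A"
                || IFERROR ( VALUE ( Leadership_Usage_Tracking_v2[Merge_Use] ), 0 ) > 0
        )
    )
VAR TotalRows =
    COUNTROWS ( Leadership_Usage_Tracking_v2 )
RETURN
    DIVIDE ( CompleteRows, TotalRows )

Like the % YES measure, this respects whatever region or location slicers are applied, since no filters are removed.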
I have a simple CSV data set such as this.
ID,MainCategory,SubCategory,Type,Value
1,E,E1,Demo,5
2,N,N3,Install,2
3,E,E1,Demo,4
4,E,E2,Install,7
5,D,D1,Install,3
6,S,S2,PM,4
7,N,N2,Install,7
8,N,N2,Demo,1
9,E,E2,Demo,2
10,D,D2,Install,6
11,D,D3,PM,4
12,S,S1,PM,8
13,N,N1,Install,5
14,S,S3,Install,8
15,S,S1,Demo,9
16,E,E3,Demo,5
17,N,N2,Install,3
18,E,E2,PM,6
19,D,D2,PM,6
20,N,N3,Demo,6
21,S,S2,Demo,7
22,E,E3,Install,2
23,S,S1,Install,4
24,S,S2,PM,8
25,D,D1,Install,5
In my Power BI Desktop, I'd like to load this into a table, and conditionally format the Value column based on whether the value in each row is greater than or less than the average for the currently selected data set.
For instance, the average of Value considering the entire table is 5.08, so if there are no filters applied (as in, all my slicers are set to select nothing), I'd like all rows whose Value is 6 or more to be background colored in one color, and the others in another color. For this, I created two measures like so:
AvgOfVal = DIVIDE( SUM(G2G[Value]), COUNTA(G2G[ID]) )
BGColor = IF(SUM(G2G[Value]) > [AvgOfVal], "Light Pink", "Light Blue")
Then I tried to apply the BGColor measure for conditionally formatting the background, but this doesn't work as expected, and instead produces the result below.
I realize that this is because the measure is calculated per row: when conditional formatting is applied, as seen in the AvgOfVal column in the table, the average is calculated per row instead of over the entire data set. How can I calculate a measure that takes the entire data set into account (respecting slicers) and use it for the conditional formatting I need?
Please keep in mind that if a user were to select a slicer filter (say, MainCategory = D), then I want the conditional formatting to reflect this. So in this case, given that AvgOfVal = 4.80 for MainCategory = D entries, I'd like all rows whose Value >= 5 to be in one color, and others in another color.
I realize that this is due to the fact that the measure is calculated per row
Yes. The key is understanding how that happens. When the measure is calculated, a "context transition" happens and the current row is added to the filter context.
So what you want is a calculation that removes the row filter that was added in the context transition. That is exactly what ALLSELECTED() does, e.g.
AvgOfVal = CALCULATE( AVERAGE( G2G[Value] ), ALLSELECTED() )
This removes the "innermost" filter, which in this case is the filter on the row, but leaves all other filters, i.e. filters added on the report, page, or visual, and filters coming from interactions with other visuals such as slicers.
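With that corrected AvgOfVal, the BGColor measure from the question can stay exactly as written; shown here again only so the pair reads together:

BGColor =
IF ( SUM ( G2G[Value] ) > [AvgOfVal], "Light Pink", "Light Blue" )

Apply BGColor to the Value column via conditional formatting with "Format by" set to "Field value"; the comparison now uses the average of everything the slicers leave visible rather than the single row's own value.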
The Films table looks like this:
There is a ComfortBreaks table that looks like the image below.
In the Films table I need to create a calculated column called NumberBreaks which shows for each film the number of breaks needed. To do this I have to pick out the value of the Breaks column where:
The value of the lower limit in the ComfortBreaks table is less than or equal to this film's running time in minutes
and
The value of the upper limit in the ComfortBreaks table is greater than this film's running time in minutes.
The result should look like the image below.
There cannot be a relationship between the two tables, so this has to be done without creating a relationship between them.
I tried a lookup function, but it showed an error: "A table of multiple values was supplied where a single value was expected."
You can use the code below for your calculated column:
number_of_breaks =
VAR current_row_run_time_minutes = Films[RunTimeMinutes]
RETURN
    MINX (
        FILTER (
            ComfortBreaks,
            ComfortBreaks[LowerLimit] <= current_row_run_time_minutes
                && ComfortBreaks[UpperLimit] > current_row_run_time_minutes
        ),
        ComfortBreaks[Breaks]
    )
You can now perform your further average calculation on the new calculated column.
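For that follow-up calculation, the new column behaves like any other numeric column, so something as simple as the measure below works (the measure name is just illustrative):

Average Breaks = AVERAGE ( Films[number_of_breaks] )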
I’m trying to compute the difference between (budget) allocations and expenditures.
At first glance, this seems easy to do with a simple measure:
Balance = SUM(Register[AllocationAmount]) - SUM(Register[TransactionAmount])
However, there’s an extra rule to factor in: Transaction amounts dated prior to the first allocation should be ignored. Measure Balance gets more complex to accommodate this:
Balance = CALCULATE(
SUM(Register[AllocationAmount]) - SUM(Register[TransactionAmount]),
FILTER(Dates,
Dates[Date] >= FIRSTDATE(Category[InitialAllocationDate])
&& NOT ISBLANK(FIRSTDATE(Category[InitialAllocationDate]))
)
)
Unfortunately, Balance doesn't add up correctly when used to compute totals up a hierarchy in the Category table. This makes sense. When Balance is evaluated up the hierarchy, FIRSTDATE(Category[InitialAllocationDate]) in the date filter pulls the first date out of all dates associated with all Category rows in context at that higher level and then uses this date to apply the filter once for the entire grouping.
This is incorrect. In order to produce an accurate total, the filter needs to be applied on a per-Category basis so that filtering factors in each category's InitialAllocationDate. To achieve this, I changed Balance so that it computes only when a single Category[Name] filter is applied (i.e. no hierarchy grouping is taking place). Then, I created a separate measure that uses SUMX to evaluate Balance on a per-Category row basis.
Balance = IF (HASONEFILTER(Category[Name]),
CALCULATE(
SUM(Register[AllocationAmount]) - SUM(Register[TransactionAmount]),
FILTER(Dates,
Dates[Date] >= FIRSTDATE(Category[InitialAllocationDate])
&& NOT ISBLANK(FIRSTDATE(Category[InitialAllocationDate]))
)
)
)
CumulativeBalance = SUMX(Grouping, Register[Balance])
This produces correct totals up Category's hierarchy (yay!); however, it requires two measures.
Question
Is there a way I can eliminate the second measure (CumulativeBalance) and instead somehow adjust Balance so that it computes correctly up the hierarchy?
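One pattern worth trying (a sketch only, untested against your model; iterating VALUES ( Category[Name] ) here is my substitute for your Grouping table) is to move the SUMX inside Balance so the per-category filter logic still runs at total level:

Balance =
SUMX (
    VALUES ( Category[Name] ),
    CALCULATE (
        SUM ( Register[AllocationAmount] ) - SUM ( Register[TransactionAmount] ),
        FILTER (
            Dates,
            Dates[Date] >= FIRSTDATE ( Category[InitialAllocationDate] )
                && NOT ISBLANK ( FIRSTDATE ( Category[InitialAllocationDate] ) )
        )
    )
)

At a single-category level VALUES returns one row, so this reduces to the original expression; higher up the hierarchy the CALCULATE is evaluated once per category (context transition filters each iteration to that category) and the results are summed, which is what CumulativeBalance was doing with two measures.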