I want to calculate a % based on the formula below. It's a little bit tricky and I am kind of stuck, not sure how to do it. Thanks if anyone can help.
I have a few records in the table below, grouped by Range:
Range Count
0-10 50
10-20 12
20-30 9
30-40 0
40-50 0
50-60 1
60-70 4
70-80 45
80-90 16
90-100 7
Other 1
I want to have one more column with the cumulative % of each range, i.e. the running sum of Count up to that row divided by the total Count (145), something like below:
Range Count Cumulative % of Range
0-10 50 34.5% (which is 50/145)
10-20 12 42.7% (which is 62/145)
20-30 9 48.9% (which is 71/145)
30-40 0 48.9% (which is 71/145)
40-50 0 48.9% (which is 71/145)
50-60 1 49.6% (which is 72/145)
60-70 4 52.4% (which is 76/145)
70-80 45 83.4% (which is 121/145)
80-90 16 94.5% (which is 137/145)
90-100 7 99.3% (which is 144/145)
Other 1 100.0% (which is 145/145)
Follow the steps below to get your answer. Please vote and accept the answer if you find the solution helpful.
1st step - Create an index column from your Range column. I have replaced the "Other" value with 9999; you can use any larger number that is unlikely to appear in your dataset. Convert this new column to a whole number.
Sort Column =
IF (
    Sickness[Range] = "Other",
    9999,
    CONVERT ( LEFT ( Sickness[Range], SEARCH ( "-", Sickness[Range], 1, LEN ( Sickness[Range] ) + 1 ) - 1 ), INTEGER )
)
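To see how the expression works on a sample value: for "70-80", SEARCH finds the "-" at position 3, LEFT keeps "70", and CONVERT turns it into the whole number 70. "Other" never reaches that branch, because the IF maps it straight to 9999.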
2nd step - Use the measure below to get the value:
Measure =
VAR RunningTotal =
    CALCULATE (
        SUM ( Sickness[Count] ),
        FILTER ( ALL ( Sickness ), Sickness[Sort Column] <= MAX ( Sickness[Sort Column] ) )
    )
VAR TotalSum =
    CALCULATE ( SUM ( Sickness[Count] ), ALL () )
RETURN
    RunningTotal / TotalSum
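To sanity-check the logic against your sample data: for the 20-30 row the Sort Column is 20, so the filter keeps the 0-10, 10-20 and 20-30 buckets, RunningTotal is 50 + 12 + 9 = 71, and the measure returns 71/145, the 48.9% in your expected output. Format the measure as a percentage to match your table.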
Below is the output that exactly matches your requirement.
A cumulative calculation always requires an ordering. That said, if the values in your "Range" column are really as shown, the column itself will also work for this purpose, since an ascending sort on this field keeps the data in the expected order. Do the following to get your desired output.
Create the following measure-
count_percentage =
VAR total_of_count =
    CALCULATE (
        SUM ( your_table_name[Count] ),
        ALL ( your_table_name )
    )
VAR cumulative_count =
    CALCULATE (
        SUM ( your_table_name[Count] ),
        FILTER (
            ALL ( your_table_name ),
            your_table_name[Range] <= MIN ( your_table_name[Range] )
        )
    )
RETURN
    cumulative_count / total_of_count
Here is the final output-
Related
I have a problem with calculating a measure that sums values for the 3 previous periods.
Below I attach a sample fact table and dict table to show the problem I am facing.
fact table:
date customer segment value
01.01.2021 1 A 10
02.01.2021 1 A 10
03.01.2021 1 A 10
04.01.2021 1 A 10
01.01.2021 2 B 20
02.01.2021 2 B 30
03.01.2021 2 B 40
dict table:
segment segment_desc
A Name of A
B Name of B
Approach I have taken:
last 3 value =
VAR DATES = DATESINPERIOD(facts[date],LASTDATE(facts[date]), -3,MONTH)
RETURN CALCULATE([sum value], DATES)
It produces correct results as long as there is at least one record for April.
When I use a filter on segment_desc = 'B', it produces the result I attached: the value in April equals 20, which is obviously not what I wanted. I would expect it to be 50.
Answer to the main question:
Time intelligence functions like DATESINPERIOD require a proper calendar table, because they expect continuous dates without gaps.
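As a minimal sketch (the table name 'Calendar' and the exact measure names are my assumptions, not taken from your model), you could build a calendar table, mark it as a date table, relate Calendar[Date] to facts[date], and point DATESINPERIOD at it:

Calendar = CALENDAR ( MIN ( facts[date] ), MAX ( facts[date] ) )

last 3 value =
CALCULATE (
    [sum value],
    DATESINPERIOD ( 'Calendar'[Date], MAX ( 'Calendar'[Date] ), -3, MONTH )
)

With a standard single-direction relationship, the segment filter does not reach the calendar, so MAX ( 'Calendar'[Date] ) is not blank in April even when segment B has no April rows.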
Answer to the follow-up question "why does the measure show a value for January?"
It's a bit tricky. First, notice that LASTDATE in this filter context returns blank:
So, your DAX measure then becomes this:
last 3 value =
VAR DATES = DATESINPERIOD(facts[date], BLANK(), -3,MONTH)
RETURN CALCULATE([sum value], DATES)
BLANK minus 3 months does not make sense, so DAX resolves this by replacing BLANK with the first (minimum) date in the column. In this case, that is 1/1/2021. It then goes back 3 months from that date. As a result, the final measure is effectively:
last 3 value =
CALCULATE ( [sum value], { DATE ( 2020, 11, 1 ), DATE ( 2020, 12, 1 ), DATE ( 2021, 1, 1 ) } )
Since you have no data prior to 2021-01-01, the final result shows only January values.
I'm using Power BI linked to two separate Access databases.
I have two datasets containing cost estimates. The cost estimates in Dataset 1 run through 2054; the cost estimates in Dataset 2 run through 2074. I used the Append function to join the two tables together and used the Quick Measure for Running Total to create values for cumulative cost by year. I charted this measure and noticed a significant decrease between 2054 and 2055 and was able to determine that the decrease is the cumulative value for Dataset 1. Does anybody know any ways to fix this?
Roughly explained:
Dataset 1 through 2054 totals to 4.5M.
Dataset 2 through 2054 totals to 3M
Dataset 2 through 2055 totals to 3.25M
Appended Dataset through 2054 totals to 7.5M
Appended Dataset through 2055 totals 3.25M instead of the expected 7.75M
I think the issue might be caused by Dataset 1 not having a value for 2055 or after, but I'm not sure how to resolve this issue.
The measure I'm using is:
Cumulative Cost =
CALCULATE (
    SUM ( 'AppendedQuery'[Value] ),
    FILTER (
        ALLSELECTED ( 'AppendedQuery'[Year] ),
        ISONORAFTER ( 'AppendedQuery'[Year], MAX ( 'AppendedQuery'[Year] ), DESC )
    )
)
ETA: Picture to explain
Here is your Dataset 1-
Here is your Dataset 2-
Here is your final Dataset after appending Dataset 1 & 2
And finally, here is the output when you add the columns Year and Cumulative Cost to a table visual. As standard Power BI behavior, this is just grouping the data by the Year column and applying SUM to the Cumulative Cost column.
The calculations are simple-
2051 > 1 + 1 = 2
2052 > 2 + 2 = 4
2053 > 3 + 3 = 6
2054 > 4 + 4 = 8
2055 > 5 = 5
2056 > 6 = 6
=========================
Solution for your case:
As I already said in the comments, with the current data the solution will not be a standard one and assumes a fixed $1 per year per department. But if you are happy with this static assumption, you can apply the following steps to achieve your required output-
Step-1 Create a custom column as below (adjust the table name to match yours)-
this_year_spent = IF('Dataset 3'[Cumulative Cost] = BLANK(),0,1)
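Note that in DAX the comparison 'Dataset 3'[Cumulative Cost] = BLANK() also returns TRUE when the value is 0, so a genuine $0 year would be counted as no spend. If a cost of exactly 0 should still count as a spend year, ISBLANK avoids that:

this_year_spent = IF ( ISBLANK ( 'Dataset 3'[Cumulative Cost] ), 0, 1 )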
Step-2 Create the following Measure-
cumulative =
VAR current_year = MIN ( 'Dataset 3'[Year] )
RETURN
    CALCULATE (
        SUM ( 'Dataset 3'[this_year_spent] ),
        FILTER (
            ALL ( 'Dataset 3' ),
            'Dataset 3'[Year] <= current_year
        )
    )
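With the $1-per-year assumption and the datasets illustrated above (one row per dataset per year, Dataset 1 covering 2051-2054 and Dataset 2 covering 2051-2056), this measure returns 8 for 2054, 9 for 2055 and 10 for 2056, so the running total no longer drops when Dataset 1 ends.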
Here is the final output-
I am trying to count students as absent if they have skipped more than 8 bell periods on the same day. I was wondering if somebody could help me out here. I tried a distinct count with a count of bell periods greater than 6 (that is the ballpark), but it is not working. I am providing a sample table below.
You need to count the items, but using ALLEXCEPT, so that the count only uses the filter context of StudentID and Date.
So, for example, based on your data, assuming that student 101 missed 9 bells:
Measure =
VAR _countofmisses =
    CALCULATE (
        COUNTAX ( Table1, 1 ),
        ALLEXCEPT ( Table1, Table1[Student ], Table1[Date] )
    )
RETURN
    IF ( _countofmisses >= 7, "Missed", "OK" )
Which would give the following:
You can change the COUNTAX to a SUMX and still get the same result. All it is doing is counting/summing 1 for each row of the filter condition.
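For reference, a sketch of the SUMX form, using the same table and column names as above:

Measure SUMX =
VAR _countofmisses =
    CALCULATE (
        SUMX ( Table1, 1 ),
        ALLEXCEPT ( Table1, Table1[Student ], Table1[Date] )
    )
RETURN
    IF ( _countofmisses >= 7, "Missed", "OK" )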
If I've read it wrong and student 101 has attended all 9 bells, just change the IF >= clause to the logic you need.
Problem
I'm trying to calculate and display the maximum value of all selected rows alongside their actual values in a table in Power BI. When I try to do this with the measure MaxSelectedSales = MAXX(ALLSELECTED(FactSales), FactSales[Value]), the maximum value ends up being repeated, like this:
If I add additional dimensions to the output, even more rows appear.
What I want to see is just the selected rows in the fact table, without the blank values. (i.e., only four rows would be displayed for SaleId 1 through 4).
Does anyone know how I can achieve my goal with the data model shown below?
Details
I've configured the following model.
The DimMarket and DimSubMarket tables have two rows each, you can see their names above. The FactSales table looks like this:
SaleId MarketId SubMarketId Value IsCurrent
1 1 1 100 true
2 2 1 50 true
3 1 2 60 true
4 2 2 140 true
5 1 1 30 false
6 2 2 20 false
7 1 1 90 false
8 2 2 200 false
In the table output, I've filtered FactSales to only include rows where IsCurrent = true by setting a visual level filter.
Your max value (the measure) is a scalar value (a single value only). If you put a scalar value in a table with the other records, the value just gets repeated. In general, mixing scalar values and records (tables) does not really bring any benefit.
Measures like yours are better displayed in a KPI or Multi KPI visual (normally with the year, so that you get the max value per year).
If you just want to display the max value of selected rows (for example a filter in your table), use this measure:
Max Value = MAX(FactSales[Value])
This way all filter which are applied are considered in the measures calculation.
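For example, with your IsCurrent = true visual-level filter applied and no other selections, the current rows have the values 100, 50, 60 and 140, so a card (or the table total) would show 140.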
Here is a sample:
I've found a solution to my problem, but I'm slightly concerned about query performance, although on my current dataset things seem to perform fairly well.
MaxSelectedSales =
MAXX (
    FILTER (
        SELECTCOLUMNS (
            ALLSELECTED ( FactSales ),
            "id", FactSales[SaleId],
            "max", MAXX ( ALLSELECTED ( FactSales ), FactSales[Value] )
        ),
        [id] = MAX ( FactSales[SaleId] )
    ),
    [max]
)
If I understand this correctly, for every row in the output this measure calculates the maximum value across all selected FactSales rows, sets it as a column named max, and then filters the table so that only the current FactSales[SaleId] is kept. The performance hit comes from the fact that the inner MAXX needs to be executed for every row in the output, and a full table scan is done each time.
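If the goal is simply to show the ALLSELECTED maximum only on rows that actually exist in the current filter context, a shorter form that should behave the same on this model (a sketch, not benchmarked against the real data) is:

MaxSelectedSales =
IF (
    NOT ISEMPTY ( FactSales ),
    MAXX ( ALLSELECTED ( FactSales ), FactSales[Value] )
)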
Posted on behalf of the question asker
I have this table:
Id Length(m) Defect Site Date
1 10 1 y 10/1/19
2 60 0 x 09/1/19
3 30 1 y 08/1/19
4 80 1 x 07/1/19
5 20 1 x 06/1/19
I want to count the number of defects and ids that are in the last 100m of length (sorted by date DESC), whilst maintaining the ability for this to change with additional filters. For example, what is the number of defects for site x in the last 100m, or what is the number of defects in the last 100m that have an ID bigger than 1.
For the question 'What is the number of defects for site x in the last 100m', I would like the result to be 2, as the table should look like this:
Id Length(m) Length Cum. Defect Site Date
4 80 80 1 x 07/1/19
5 20 100 1 x 06/1/19
I believe the issue in creating this query so far has been that I need to create a cumulative DAX query first and then base the counting query off of that DAX query.
Also important to note that the filtering will be undertaken in PowerBI. I don't want to hardcode filters in the DAX query.
Any help is welcome.
Alright!
I have taken a crack at this. I did assume that the id of the items(?) increments through time, so the oldest item has the lowest id.
You were correct that we need to filter the table based on the cumulative sum of the meters. So I first add a virtual column to the table (CumulativeMeters) which I can then use to filter the table on. I need to break the filter context of the ADDCOLUMNS function to sum up the meters of multiple rows.
It is important to use ALLSELECTED to keep any external filters in place. After this it is pretty straightforward to filter the table to rows where CumulativeMeters is <= 100 meters and the row is a defect. Counting the rows in the resulting table gives you the result you are looking for:
# Defects last 100m =
CALCULATE (
    COUNTROWS ( Items ),
    FILTER (
        ADDCOLUMNS (
            Items,
            "CumulativeMeters",
                CALCULATE (
                    SUM ( Items[Length(m)] ),
                    FILTER (
                        ALLSELECTED ( Items ),
                        Items[Date] <= EARLIER ( Items[Date] )
                            && Items[Id] <= EARLIER ( Items[Id] )
                    )
                )
        ),
        [CumulativeMeters] <= 100
            && Items[Defect] = 1
    )
)
Hope that helps,
Jan