Optimize unpivot and filter - Power BI

I am trying to make the following visualization:
[Screenshot: target visualization]
Using the following fact table called Crashes (Crash_ID, which is not shown, is the primary key, there are many more columns that have been left out):
[Screenshot: Crashes fact table]
My approach was to first unpivot the "EA" columns from the fact table using DAX:
EA_Unpivots =
UNION(
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Distracted_Driving", "Counts", Crashes[EA_Distracted_Driving]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Impaired_Driving", "Counts", Crashes[EA_Impaired_Driving]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Intersection_Safety", "Counts", Crashes[EA_Intersection_Safety]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Older_Road_Users", "Counts", Crashes[EA_Older_Road_Users]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Pedestrian_Safety", "Counts", Crashes[EA_Pedestrian_Safety]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Roadway_and_Lane_Departures", "Counts", Crashes[EA_Roadway_and_Lane_Departures]),
    SELECTCOLUMNS(Crashes, "Crash_ID", Crashes[Crash_ID], "EA", "EA_Speeding", "Counts", Crashes[EA_Speeding])
)
and then use
EA_Counts =
GROUPBY(
    EA_Unpivots,
    EA_Unpivots[EA],
    "EA_Count", SUMX(CURRENTGROUP(), EA_Unpivots[Counts])
)
to set up the table needed to produce the visualization.
However, the drawback of this approach is that the visualization will not react to filters applied dynamically on the dashboard, since EA_Counts no longer has Crash_ID as a column, and the filters operate indirectly on Crash_ID by selecting different attributes of each crash from the fact table.
Because of this, I noticed that EA_Counts was unnecessary and I could get the visualization by just creating a relationship between the fact table, Crashes, and EA_Unpivots on Crash_ID:
[Screenshot: data model]
and then setting up the visualization like this:
[Screenshot: visualization setup]
Here is my question: Is there a way to achieve the same result without creating EA_Unpivots? The reason is that EA_Unpivots is very large and blows up the file size. It seems there must be a more efficient way to achieve this. Thanks.
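For reference, one common pattern for getting per-category counts without materializing an unpivoted table is a small disconnected table of category names plus a SWITCH-based measure. The sketch below is an assumption, not something from the question (the EA_Dim table and the measure name are invented); because the measure reads Crashes directly, it reacts to any filter applied to the fact table:

```dax
EA_Dim = DATATABLE(
    "EA", STRING,
    {
        {"EA_Distracted_Driving"},
        {"EA_Impaired_Driving"},
        {"EA_Intersection_Safety"},
        {"EA_Older_Road_Users"},
        {"EA_Pedestrian_Safety"},
        {"EA_Roadway_and_Lane_Departures"},
        {"EA_Speeding"}
    }
)

EA Count =
SWITCH(
    SELECTEDVALUE(EA_Dim[EA]),
    "EA_Distracted_Driving", SUM(Crashes[EA_Distracted_Driving]),
    "EA_Impaired_Driving", SUM(Crashes[EA_Impaired_Driving]),
    "EA_Intersection_Safety", SUM(Crashes[EA_Intersection_Safety]),
    "EA_Older_Road_Users", SUM(Crashes[EA_Older_Road_Users]),
    "EA_Pedestrian_Safety", SUM(Crashes[EA_Pedestrian_Safety]),
    "EA_Roadway_and_Lane_Departures", SUM(Crashes[EA_Roadway_and_Lane_Departures]),
    "EA_Speeding", SUM(Crashes[EA_Speeding])
)
```

With this sketch, EA_Dim[EA] goes on the visualization's axis and the measure as the value; no relationship between EA_Dim and Crashes is needed, and no wide unpivoted table is stored in the model.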

Related

PowerBI: two filtered location sets in one map

I have two sets of identical data that I filter differently. One shows sales by location in test locations and the other in control locations. Is there a way to append the results in a table with a "Test/Control" flag based on the first set of slicers so that I can show all the locations color coded by the flag?
You have two options to achieve this. In the model (DAX), you can create a calculated table and use the UNION function to append the two sets of rows together.
https://dax.guide/union/
However, UNION is quite fussy: the two parameter tables must have the same number of columns. Sometimes you can overcome small differences by adding other functions, but complex transforms are harder and you can't debug them.
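As a sketch of the calculated-table route (the table names TestLocations and ControlLocations and the "Flag" column are assumptions, not from the question):

```dax
AllLocations =
UNION(
    ADDCOLUMNS(TestLocations, "Flag", "Test"),
    ADDCOLUMNS(ControlLocations, "Flag", "Control")
)
```

The map visual can then use the Flag column as its legend/color field. Note that a calculated table is evaluated at model refresh, so this pattern bakes in whatever filters each source table was defined with rather than responding to slicers at runtime.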
For complex requirements, you can use the Power Query Editor - it has an Append Query button on the Home Ribbon. Each query you feed in can have complex transformations.

Is there a way that Power BI does not aggregate all numeric data?

So, I have 3 xlsx files full of data that has already been processed, so I pretty much just need to display the data using graphs. The problem is that Power BI aggregates all numeric data (using count, sum, etc.). In their community they suggest creating new measures, but in that case I would have to create A LOT of measures... Also, I tried converting the data to text, and even then Power BI counts it!
Any help, please?
There are several ways to tackle this:
When you pull a field into the field well for a visualisation, you can click the drop-down in the field well and select "Don't summarize".
In the data model, select the column and, on the ribbon, select "Don't summarize" as the summarization option in the Properties group.
The screenshot shows the field well option on the left and the data model options on the right, one for a numeric and one for a text field.
And, yes, you never want to use the implicit measures, i.e. the automatic calculations that Power BI creates. If you want to keep on top of what is being calculated, create your own measures, and yes, there will be many.
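To illustrate what explicit measures look like (the Sales table and Amount column below are placeholders, not from the question):

```dax
Total Amount = SUM(Sales[Amount])

Record Count = COUNTROWS(Sales)
```

Explicit measures like these replace the implicit SUM/COUNT aggregations that Power BI would otherwise generate per field, and they show up by name in the field list so you always know what is being calculated.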
Edit: If by "aggregating" you are referring to the fact that duplicate text values will be grouped in a table visual (you don't see any duplicates), then you need to add a column with unique values to the table so all the duplicates of the text values show up. This can be done in the data source by adding an Index column, then using that Index column in the table visual and setting it to a very narrow width to make it invisible.
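In Power Query, the index column mentioned above can be added with a single step; the step names here are illustrative:

```powerquery
let
    // whatever your query has produced so far
    Source = #"Previous Step",
    // append a unique index starting at 1, incrementing by 1
    AddedIndex = Table.AddIndexColumn(Source, "Index", 1, 1)
in
    AddedIndex
```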

PowerBI Query Performance

I have a Power BI report that has a few different pages displaying different visuals. The report uses the same table of data (let's call it Jobs).
The previous author of this report created two queries in the data section that read off this base table but apply different transformations and filters to the underlying data. The visuals then use one or the other of these models to display their data. For example, the first one applies a filter to exclude certain columns based on a status field, and the other applies a different filter and performs transformations on some of the columns.
When I manually refresh the report, it looks like the report is retrieving data for both of these queries, even though the base data is the same. Since the dataset is quite large, I am worried that this report has been built inefficiently but I am not sure if there is a better way of doing this.
TL;DR: The Source and Navigation steps of both queries are exactly the same. Is this retrieving the data twice and making my report inefficient, and if so, what is the appropriate way to achieve what I am trying to do?
Power BI will try to parallelize as much as possible. If you have two queries that read from the same table, then two queries will be executed.
To avoid this you can:
create a query which only gets the necessary data from the table.
Set this table not to be loaded in the model (toggle "Enable Load")
Every other table that starts from this table won't be a clone of it but will reference it.
In this way, the data will be fetched once from the source and then used to create other tables using PowerQuery.
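A sketch of that staging pattern in Power Query (M), with hypothetical server, database, and field names; the JobsBase query would have "Enable load" unchecked:

```powerquery
// JobsBase query - staging query, not loaded into the model
let
    Source = Sql.Database("myserver", "mydb"),
    Jobs = Source{[Schema = "dbo", Item = "Jobs"]}[Data]
in
    Jobs

// OpenJobs query - references JobsBase instead of repeating Source/Navigation
let
    Source = JobsBase,
    OpenOnly = Table.SelectRows(Source, each [Status] = "Open")
in
    OpenOnly
```

Each downstream query then contains only its own filters and transformations, and the Source/Navigation steps live in exactly one place.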

PowerBI / PowerPivot - Data not aggregating by time frame

I have created a PowerPivot model, included in the image below. I am trying to include the "IncurredLoss" value and have it sliced by time. Written Premium is in the fact table and is displaying correctly. I am aiming for IncurredLoss to display in a similar fashion.
I have tried the following solutions:
Add new related column: Related(LossSummary[IncurredLoss]). Result: No data
DAX Summary Measure: =CALCULATE(SUM(LossSummary[IncurredLoss])). Result: Sum of everything in LossSummary[IncurredLoss] (not time sliced)
Simply adding the Incurred Loss column to the Pivot Table panel. Result: Sum of everything in LossSummary[IncurredLoss] (not time sliced)
A few other notes:
LossKey joins LossSummary to PolicyPremiumFact
Reportdate joins PolicyPremiumFact to the Calendar.
There is 1 row in LossSummary per date and Policy. LossKey contains this information and is the PK on that table.
Any ideas, clarifications or pointers are most certainly welcome. Thank you!
The related column should work. I was able to get it to work in both Excel 2016 and Power BI Desktop. Rather than bombarding you with questions, I'll try and walk through how I would troubleshoot further, in the hopes it gets you to a solution faster:
First, check the PolicyPremiumFact table inside Power Pivot and see if the IncurredLossRelated field is blank or not. If it is consistently blank, then the related column isn't working. The primary reason the related column wouldn't work is if there's a problem with your relationships. Things I would check:
Ensure that the relationships are between the fields you think they are between (i.e. you didn't accidentally join LossKey in one table to a different field in the other table)
Ensure that the joined fields contain the same data (i.e. you didn't call a field LossKey, but in fact, it isn't the LossKey at all)
Ensure that the joined fields are the same data type in Power Pivot (this is most common with dates: e.g. joining a text field that looks like a date to an actual date field may work, but not act as expected)
If none of the above are the problem, it doesn't hurt to walk through your data for a given date in Power Pivot. E.g. filter your PolicyPremiumFact table to a specific date and look at the LossKeys. Then go to the LossSummary table and filter to those LossKeys. Stepping through like this might reveal an oversight (e.g. maybe the LossKeys weren't fully loaded into your model).
If none of the above reveals anything, or if the related column is not blank inside Power Pivot, my suggestion would be to try a newer version of Excel (e.g. Excel 2016), or the most recent version of Power BI Desktop.
If the issue still occurs in the most recent version of Excel/Power BI Desktop, then there's something else going on with your data model that's impacting the RELATED calculation. If that's the case, it would be very helpful if you could mock up your file with sample data that reproduces the problem and share it.
One final suggestion I have is to consider restructuring your tables before they arrive in your data model. In your case, I'd recommend restructuring PolicyPremiumFact to include all the facts from LossSummary, rather than having a separate table joined to your primary fact table. This is what you're doing with the RELATED field to some extent, but it's cleaner to do before or as your data is imported into Power Pivot (e.g. using SQL or Power Query) rather than in DAX.
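One way to do that restructuring in Power Query, sketched with the table and key names from the question (the step names and the left-outer join choice are assumptions):

```powerquery
let
    Source = PolicyPremiumFact,
    // join LossSummary onto the premium fact table by LossKey
    Merged = Table.NestedJoin(Source, {"LossKey"}, LossSummary, {"LossKey"}, "Loss", JoinKind.LeftOuter),
    // pull IncurredLoss out of the nested table column
    Expanded = Table.ExpandTableColumn(Merged, "Loss", {"IncurredLoss"})
in
    Expanded
```

After this, IncurredLoss lives directly on the fact table, so it slices by the Calendar relationship the same way Written Premium does, with no RELATED column needed.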
Hope some of this helps.

Power Query Formula Language - Detect type of columns

In Power BI, I've got some query tables generated from imported data. All the data comes in as type 'Any', and I'm trying to automatically detect the type of the data in each column.
Some of the queries generate tables with columns based on the incoming data - I don't know what the columns will be until the query runs and sets up the table (the data comes from an Azure blob). As I will have quite a few tables to maintain, whose columns can change (possibly with new columns being added) on any data refresh, it would be unmanageable to go through all of them each time and press 'Detect Data Type' on the columns.
So I'm trying to figure out how to do a 'Detect Data Type' in the query formula language and attach it to the end of the query that generates the table columns. I've tried grabbing the first entry in a column and doing Value.Type(column{0}), however this comes out as 'Text' for a column that has integers in it. Pressing 'Detect Data Type' does, however, correctly identify the type as 'Whole Number'.
Does anyone know how to detect a column's entry types?
P.S. I'm not too worried about a column possibly holding values of different data types
You seem to have multiple issues here, and your solution will be fragile; there's a better way. But let's first deal with column type detection.
Power Query uses the 'any' data type as its go-to data type. You can write a function that samples the rows of a column in a table, does a best-match data type detection, then explicitly sets the data type of the column. This is messy and tricky, since you need to do it once per column. It might be workable for a fixed schema, but for a dynamic schema you'll run into a couple of things very quickly. First, you'll need to write some crazy PQ code to list all the columns and run your function on each. This will work the first time, but might break on subsequent refreshes, because data model changes are not allowed during refresh. If you're using a tool like Power BI Desktop, you'll be able to fix things up. If you publish your report to the Power BI service, you'll just see refresh errors.
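A rough sketch of such a sampling function in M (the 100-row sample size, the limited set of types checked, and the query name Source are all arbitrary choices for illustration, not from the answer):

```powerquery
let
    // guess a column's type from a sample of its values - crude heuristic
    DetectColumnType = (tbl as table, col as text) as type =>
        let
            // look at up to 100 non-null values
            Sample = List.FirstN(List.RemoveNulls(Table.Column(tbl, col)), 100),
            IsNumeric = List.MatchesAll(Sample, each not (try Number.From(_))[HasError]),
            IsDate = List.MatchesAll(Sample, each not (try Date.From(_))[HasError]),
            // numbers are tested first because Date.From also accepts numbers
            Detected = if IsNumeric then type number
                       else if IsDate then type date
                       else type text
        in
            Detected,
    // apply the guess to every column of a hypothetical query step called Source
    Typed = Table.TransformColumnTypes(
        Source,
        List.Transform(Table.ColumnNames(Source), each {_, DetectColumnType(Source, _)})
    )
in
    Typed
```

This is exactly the once-per-column mess described above, and it inherits all the refresh-time limitations; it is a sketch of the idea, not a robust solution.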
Dynamic Schemas will suffer the same data model change issue I mentioned above.
The alternative solution, which avoids these problems, is to use a DirectQuery data source instead of Power Query. If you load your data into Azure SQL or a Tabular Model, the reporting layer will pick up the updated fields automatically, so you don't have to work around this in PQ.