Creating a "since last refresh" KPI? - powerbi

I have a Power BI report that pulls the current inventory of system statuses from an Excel spreadsheet. To keep it simple, say I have a single measure that reads "40% complete".
If I refresh the Power BI dataset and it now says "60%", is there any way to have a KPI automatically show +20%? Every example I've found requires another dataset that keeps the historical data, and that's not really an option in this situation. Is there any way to calculate it or store it within the Power BI query itself?

Power BI is not designed to store historical data; that is what a database is for.
To calculate that 20% difference, you need to store the historical values somewhere. Power BI's purpose is to connect to sources, load data, and visualize it, not to act as a data repository.
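That said, if something outside Power BI appends a snapshot row to the source before each refresh (e.g. an extra sheet the spreadsheet owner writes to), the delta becomes a simple measure. A minimal sketch in DAX, assuming a hypothetical SnapshotHistory table with RefreshDate and PctComplete columns:

    -- SnapshotHistory ( RefreshDate, PctComplete ) is a hypothetical table
    -- that an external process appends a row to before every refresh.
    Change Since Last Refresh =
    VAR LatestDate = MAX ( SnapshotHistory[RefreshDate] )
    VAR PriorDate =
        CALCULATE (
            MAX ( SnapshotHistory[RefreshDate] ),
            SnapshotHistory[RefreshDate] < LatestDate
        )
    VAR LatestPct =
        CALCULATE (
            MAX ( SnapshotHistory[PctComplete] ),
            SnapshotHistory[RefreshDate] = LatestDate
        )
    VAR PriorPct =
        CALCULATE (
            MAX ( SnapshotHistory[PctComplete] ),
            SnapshotHistory[RefreshDate] = PriorDate
        )
    RETURN
        LatestPct - PriorPct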

Related

Power BI "Analyse in Excel" table export

I have a question about the "Analyze in Excel" function ("In Excel analysieren" in the German UI) when a PBI (Power BI) report has been published.
I load a flat table into PBI and create some measures there. Basically, it's about account numbers and their limits; no calculation is necessary or possible here.
If I now want to analyse the data in an Excel PivotTable, I can only display the measures as values. An analysis of account numbers and limits is not possible, as the limits are not measures.
What do I have to do to be able to select the original data as values?
Thank you very much for your feedback and best regards
Andi
Try adding a measure from the table you want to analyze and then double-clicking on the measure value. This will pop open a new sheet and drill through to the row detail behind that cell. It may give you the detail you are looking for. I also believe it will give you proper data types on the columns, so you can do Excel analysis.
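If the goal is simply to get a plain column like a balance into the Values area, another option is to wrap it in an explicit measure, since Analyze in Excel only exposes measures as values. A minimal sketch, with assumed table and column names:

    -- explicit measure so the column shows up as a value in Excel;
    -- the Accounts table and Balance column names are assumed
    Total Balance = SUM ( Accounts[Balance] )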
Sorry! I do not get it.
To make it clear, I stripped my problem down to a very simple example:
I'm loading a flat file with account, currency, date, and balance information.
The respective Power BI looks like this:
After publishing the report to the cloud, I analyse the data in Excel.
However, when I try to bring the "balance" information in as a value, I receive the following message:
The balance is not a measure in Power BI. Any idea what I can do?
Thank you and best regards
Andi

Creating reports for each unique value in a column in Power BI

I am trying to find out whether it is possible to automate creating Power BI reports for each unique value in one of the columns (it's like filtering the whole report on one of the values and publishing the report, then changing to the next value and repeating the steps for the other values). Is there any fast way to do it? I wrote a program that does the filtering via the link and mouse clicks and then saves the links for each person to Excel, but I wonder if there is a more reliable and faster way to do it. I am using Power BI Premium Per User.
This is typically called "Report Bursting" or "Data-Driven Subscriptions"; here's a walkthrough of how to do it with Power Automate and Power BI.
Why don't you leverage RLS instead of hardcoding filters?
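For illustration, a typical RLS role filter is a one-line DAX expression on a user-mapping table (the names here are assumed):

    -- DAX filter expression for an RLS role on a hypothetical
    -- UserAccess table mapping report users to the rows they may see
    [Email] = USERPRINCIPALNAME ()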

Can you replace fields from a query with fields from a split query in Power BI?

I have a report in Power BI that cannot refresh because the data from the table is too large:
The amount of data on the gateway client has exceeded the limit for a single table. Please consider reducing the use of highly repetitive strings values through normalized keys, removing unused columns, or upgrading to Power BI Premium
I have tried to shrink the columns used in the dataset to the best of my ability, but it is still too large to refresh. I ran a test where, instead of using a single query to retrieve the data, I made two queries that split the columns roughly half and half and then linked them back together in Power BI using their ID column. The test data refresh appeared to start working once the table's data was split into two separate queries.
Please correct me if there is a better method to trim the data down so the dataset can refresh, but for now this is the best solution I see. What I am wondering is: now that my data is split into two separate queries, what is the best way to adapt the existing visualizations, which are linked to the full, non-refreshable query, to the split, refreshable queries? It looks like I would have to recreate the visuals from scratch, but if there is a way to simply do a mass replace of the fields, that would save a lot of time. The split queries I created both have the same fields as the non-split query.

What is the difference between edits performed in the Query Editor vs during modelling?

When I get data into Power BI, I can edit the query as well as make edits to the model.
What is the difference between edits performed in the Query Editor vs during modelling?
When you edit the query, you use Power Query, with its own Query Editor user interface. The steps you apply are recorded in the "M" language. Use Power Query to extract, transform, and finally load data into the Data Model.
Once the data is in the Data Model, you use DAX to create measures that you use in visuals. You can also use DAX to add more columns or even tables to the data model.
Whether to use Power Query or DAX to add columns or tables to the data model depends on a variety of factors. Some things are dead easy to do in Power Query, but harder to achieve with DAX, and vice versa. If you create a column with a formula that depends on a DAX measure, then you can only do that with DAX, because Power Query is not aware of the measures that are created after the load into the data model.
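For illustration (all names below are made up): a calculated column that classifies rows by a measure's result can only be created in DAX, since the measure doesn't exist yet at Power Query time:

    -- assumed measure
    Margin % = DIVIDE ( SUM ( Sales[Margin] ), SUM ( Sales[Amount] ) )

    -- calculated column on the Product table; [Margin %] is evaluated
    -- per product via context transition, which Power Query cannot do
    Margin Band = IF ( [Margin %] >= 0.3, "High margin", "Low margin" )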
Power Query is very powerful, but the M code syntax is very different from the Excel formula syntax or the VBA macro language. Learning to write advanced M code can be quite challenging.
DAX, on the other hand, behaves very similarly to Excel formulas. Many Excel functions can even be used in DAX verbatim (ROUND, LEFT, and UPPER, for example). If you know Excel, you've already got a head start on DAX, and you can ease your way into it by learning additional functions and then expanding into more complex formulas.
The latter is probably the reason why many data manipulations are done in DAX, even though they could just as well have been done in Power Query.
There are also some differences in data storage efficiency and performance. Power Query makes use of query folding with SQL sources, for example, where its transformations are actually performed at the data source, i.e. on the SQL Server side rather than in the desktop client, and only the final query result is transferred to the desktop client.
Edit after comment: when the data is loaded into the data model, an algorithm processes the data and sorts it in the way that is most efficient for maximum compression and minimum storage. I don't have any concrete examples, but adding a column in Power Query will generally result in a smaller footprint than adding the same column with DAX. Read more about the compression algorithm VertiPaq here: https://towardsdatascience.com/inside-vertipaq-in-power-bi-compress-for-success-68b888d9d463
But apart from that, it mainly comes down to personal preference based on skill and experience.
By the way, many of your questions can be answered by reading through the Microsoft documentation, e.g. https://learn.microsoft.com/en-us/power-bi/guidance/import-modeling-data-reduction

Power BI reports run slowly in DirectQuery mode

I have a Power BI report for finance. Users need to see the latest data in real time, so I have to use DirectQuery. But in this mode some functions, such as DateAdd and DatesMtd, cannot be used
("This DAX function is not supported for use in DirectQuery mode."),
so I need to write a very complex SQL statement to achieve the equivalent effect. This makes the report very slow (more than 10 seconds) every time it runs, even though the largest table in my data model has fewer than 80,000 rows. I've tried to optimize the SQL statements, but it doesn't help. Any solution?
(I use Power BI Report Server with SQL Server Enterprise edition.)
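For reference, DatesMtd-style logic can often be emulated with plain DAX filters that DirectQuery does accept; a minimal sketch, with the 'Date' table and [Total Amount] measure names assumed:

    -- month-to-date without DATESMTD; table and measure names are assumed
    Amount MTD =
    VAR MaxDate = MAX ( 'Date'[Date] )
    RETURN
        CALCULATE (
            [Total Amount],
            FILTER (
                ALL ( 'Date' ),
                'Date'[Date] <= MaxDate
                    && YEAR ( 'Date'[Date] ) = YEAR ( MaxDate )
                    && MONTH ( 'Date'[Date] ) = MONTH ( MaxDate )
            )
        )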
Of course, without more information I can't know what's taking so much time, but to understand what's happening you can use the following tools:
Power BI Performance Analyzer: this will tell you which part is taking the most time. For more info see MSDocs & SQLBI.
Check the data model and the storage mode of each table involved (i.e. fact table, calendar, customer, etc.). When querying the source, PBI won't apply filters (directly in the query) that come from tables in import mode. (Search for "composite models" on the web.)
Limit the number of objects: for each object in the dashboard a query is sent to the data source, so limiting the number of objects might help. (Remember that objects wait for each other, so one slow-loading object might cause your problem.)
(Even if you have probably already done it) have a look at the query execution plan; you can also check the queries automatically created by Power BI by capturing them (the easiest way is to use SQL Server Profiler).
I think that just by using the Power BI Performance Analyzer you will be able to see where the problem is, and can then do a more targeted search about it.
You should search for these keywords:
Native query in Power Query: some M language functions can be translated directly to SQL, so that all transformations happen on the SQL Server side.
Aggregated tables in Model view: aggregated views can be added for the specific needs of visuals. For example, if a visual shows product category with amount as the value, you can connect an aggregated SQL table to the original one so that the visual can pick up the value faster.
Hybrid tables: import mode and DQ mode can be used together, so you can use DQ for daily data and import mode for older data.