Unable to execute measure stored in Analysis Services/Power BI

I'm trying to see if I can write tests in C# that validate the measures in a model; to do so, I need to execute each measure and compare the result with a C# implementation that should produce the same value.
I've set up a basic Power BI report containing a couple of tables and measures. One of the measures is a simple count('fact_data'[Item]), and it works as expected when I look at it in the report.
When connecting to the model via ADOMD.NET, I'm able to extract the measure expression programmatically, but when I try to run it via the AdomdDataAdapter, all I get is
AdomdErrorResponseException: Query (1, 18) The syntax for '[Item]' is incorrect. (COUNT('fact_data'[Item])).
Other, more basic queries that simply return table contents work without problems.
No matter what I try, I cannot get this to execute. If I rename the column or table to something that doesn't exist, I get an error about that, so the query does seem to be executed; something must be wrong or missing in the syntax.

I still don't know how to do this via ExecuteScalar(), but wrapping the DAX in a ROW() seems to work for stored measures.
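For reference, a minimal sketch in C# of that ROW() workaround. The port in the connection string is a placeholder, and the use of AdomdCommand/ExecuteReader rather than AdomdDataAdapter is an assumption for illustration; the table and column names come from the example above.

using System;
using Microsoft.AnalysisServices.AdomdClient;

class MeasureSmokeTest
{
    static void Main()
    {
        // Placeholder connection string: point it at your Analysis Services
        // instance or the local port of a running Power BI Desktop model.
        using (var conn = new AdomdConnection("Data Source=localhost:51234"))
        {
            conn.Open();

            // A bare measure expression like COUNT('fact_data'[Item]) is not a
            // complete DAX query; wrapping it in ROW() turns it into a one-row
            // table expression that EVALUATE accepts.
            var dax = "EVALUATE ROW(\"Result\", COUNT('fact_data'[Item]))";

            using (var cmd = new AdomdCommand(dax, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetValue(0));
            }
        }
    }
}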

Related

Does the syntax in a Power BI join cause a data refresh?

I'm trying to make a Power BI report that someone else created run faster, and as I go through the queries I've noticed that some of the merged queries have different syntax. I'm wondering whether the different syntax causes a data refresh to occur during the merge.
Below are two different merged queries; one has the # sign before the table name, with the table name in quotes, and the other does not. What is the significance of not having the # sign?
It's the #"Org_Roll-Up" vs Account_Groups.
Syntax 1
= Table.NestedJoin(#"Changed Type9", {"COMPANY"}, #"Org_Roll-Up", {"ORG"}, "Org_Roll-Up", JoinKind.LeftOuter)
Syntax 2
= Table.NestedJoin(#"Removed Columns", {"ACCOUNT"}, Account_Groups, {"ACCOUNT"}, "Account_Groups", JoinKind.LeftOuter)
I'm trying to get the queries to run once and then send the data to other queries as needed, instead of refreshing each time. I have parallel loading turned off and background data refresh off.
The # syntax makes zero difference. Variable names require a # when they contain spaces or special characters; otherwise it isn't required. See here for more details:
https://bengribaudo.com/blog/2018/01/19/4321/power-query-m-primer-part4-variables-identifiers
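As a quick illustration of that rule (the step names here are made up):

let
    Simple = 1,                  // plain identifier: no # needed
    #"With Space" = 2,           // the space forces the quoted #"..." form
    Total = Simple + #"With Space"
in
    Total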
The way you've worded this, I just want to make sure you understand how Power Query does a merge. When you merge query A with query B, Power Query runs query B again for use by query A. It doesn't pull the data previously loaded by query B, or table B from the data model (nor does it change table B); it runs query B again, joins the result with query A, and loads the result to table A. So in syntax 1, #"Org_Roll-Up" will re-run, and in syntax 2, Account_Groups will re-run. Depending on what the query does and how many rows are in the table, small changes can produce quite different performance. See Chris Webb's 3-part series for more ideas: https://blog.crossjoin.co.uk/2020/05/31/optimising-the-performance-of-power-query-merges-in-power-bi-part-1/

What is the difference between edits performed in the Query Editor vs during modelling?

When I get data into Power BI, I can edit the query as well as make edits to the model.
What is the difference between an edit performed in the Query Editor and one made during modelling?
When you edit the query, you use Power Query, with its own Query Editor user interface. The steps you apply are recorded in the "M" language. Use Power Query to extract, transform, and finally load data into the Data Model.
Once the data is in the Data Model, you use DAX to create measures that you use in visuals. You can also use DAX to add more columns or even tables to the data model.
Whether to use Power Query or DAX to add columns or tables to the data model depends on a variety of factors. Some things are dead easy to do in Power Query, but harder to achieve with DAX, and vice versa. If you create a column with a formula that depends on a DAX measure, then you can only do that with DAX, because Power Query is not aware of the measures that are created after the load into the data model.
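For instance, a column whose formula depends on a measure can only be created in DAX. Here's a sketch, assuming a hypothetical 'Sales' table with an [Amount] column and a [Total Sales] measure:

// Calculated column on 'Sales'. [Total Sales] is a model measure,
// so Power Query has no way to compute this expression.
Share of All Sales =
    DIVIDE (
        Sales[Amount],
        CALCULATE ( [Total Sales], REMOVEFILTERS ( Sales ) )
    )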
Power Query is very powerful, but the M syntax is very different from the Excel formula syntax or the VBA macro language. Learning to write advanced M code can be quite challenging.
DAX, on the other hand, behaves very similarly to Excel formulas. Many Excel functions can even be used in DAX verbatim. If you know Excel, you've already got a head start on DAX, and you can ease your way into it by learning additional functions and then expanding into more complex formulas.
The latter is probably the reason why many data manipulations are done in DAX, even though they could just as well have been done in Power Query.
There are also some efficiencies in data storage and performance. For example, Power Query makes use of query folding with SQL queries: its transformations are performed at the data source, i.e. on the SQL Server side, rather than in the desktop client, and only the final query result is transferred to the desktop client.
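As a sketch, assuming a hypothetical SQL Server source and a [Year] column, a filter step applied right after the navigation step will typically fold into the generated SQL:

let
    Source = Sql.Database("myserver", "mydb"),
    Jobs = Source{[Schema = "dbo", Item = "Jobs"]}[Data],
    // This step usually folds to a WHERE clause in the native query,
    // so only matching rows are sent to the client.
    Recent = Table.SelectRows(Jobs, each [Year] >= 2023)
in
    Recent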
Edit after comment: When the data is loaded into the data model, an algorithm processes and sorts it in the way that is most efficient for maximum compression and minimum storage. I don't have any concrete examples, but adding a column in Power Query will result in a smaller footprint than adding the same column with DAX. Read more about the VertiPaq compression algorithm here: https://towardsdatascience.com/inside-vertipaq-in-power-bi-compress-for-success-68b888d9d463
But apart from that, it mainly comes down to personal preference based on skill and experience.
By the way, many of your questions can be answered by reading through the Microsoft documentation, e.g. https://learn.microsoft.com/en-us/power-bi/guidance/import-modeling-data-reduction

Power BI Query Performance

I have a Power BI report with a few different pages displaying different visuals. The report uses the same table of data (let's call it Jobs).
The previous author of this report created two queries in the data section that read off this base table, but apply different transformations and filters to the underlying data. The visuals then use one or the other of these queries to display their data. For example, the first one applies a filter to exclude certain columns based on a status field, and the other applies a different filter and performs transformations on some of the columns.
When I manually refresh the report, it looks like it retrieves data for both of these queries, even though the base data is the same. Since the dataset is quite large, I'm worried that the report has been built inefficiently, but I'm not sure whether there is a better way of doing this.
TL;DR: The Source and Navigation steps of both queries are exactly the same. Is this retrieving the data twice and making my report inefficient, and if so, what is the appropriate way to achieve what I am trying to do?
Power BI will try to parallelize as much as possible. If you have two queries that read from the same table, then two queries will be executed.
To avoid this you can:
Create a query that gets only the necessary data from the table.
Set this query not to be loaded into the model (toggle "Enable load").
Have every other query that starts from this table reference it rather than clone it.
This way, the data is fetched once from the source and then used to build the other tables in Power Query, as sketched below.
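A minimal sketch of that setup, assuming a hypothetical server, database, and status filter; note that "Enable load" is toggled in the Power Query UI, not expressed in the M code itself:

// Staging query "JobsBase" (Enable load turned off)
let
    Source = Sql.Database("myserver", "mydb"),
    Jobs = Source{[Schema = "dbo", Item = "Jobs"]}[Data]
in
    Jobs

// Query "OpenJobs" references JobsBase instead of hitting the source again
let
    Source = JobsBase,
    Filtered = Table.SelectRows(Source, each [Status] = "Open")
in
    Filtered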

Creating a measure using the DAX function LEFT on a table from Azure Analysis Services

I am trying to get the first 4 digits of a string from a table in Power BI. The connection is a live/direct connection, which does not allow me to edit the query. Also, I am unable to create a new column, so I have to stick with creating a new measure.
Now, I am using the following formula to get what I need.
LocationCd = mid(vw_DW_Contracts[ContractNumber],1,5)
but this is not working, as the vw_DW_Contracts table cannot be used in a measure. Is there a workaround for this problem?
I do not have access to the Analysis Services instance, so I cannot make any modifications at the source.
Please help.
Thanks
but this is not working, as the vw_DW_Contracts table cannot be used in a measure.
I'm not sure what you mean by this, but I'm guessing the message you see is telling you that measures expect an aggregation. The formula you posted would be great as a calculated column, where it can be evaluated row by row; measures are aggregations over multiple rows.
If you are trying to make a new field holding the location code to use in visuals on a categorical axis, this should be a column rather than a measure. You could write a measure that shows a location code using something like LASTNONBLANK ( SELECTCOLUMNS ( vw_DW_Contracts, "LocationCd", MID ( vw_DW_Contracts[ContractNumber], 1, 5 ) ), 1 ), but I doubt that is what you want.
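If the goal is just to display the code for whichever single contract is in filter context, here is one hedged alternative sketch (same view name as the question; note the question asks for 4 characters even though its MID call takes 5):

LocationCd =
    // Returns the leading characters when exactly one ContractNumber is in
    // filter context, and BLANK() otherwise.
    MID ( SELECTEDVALUE ( vw_DW_Contracts[ContractNumber] ), 1, 4 )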

Power Query Formula Language - Detect type of columns

In Power BI, I've got some query tables generated from imported data. All the data comes in as type 'Any', and I'm trying to automatically detect the type of the data in each column.
Some of the queries generate tables with columns based on the incoming data; I don't know what the columns are going to be until the query runs and sets up the table (the data comes from an Azure blob). As I will have quite a few tables to maintain, whose columns can change (possibly with new columns being added) on any data refresh, it would be unmanageable to go through all of them each time and press 'Detect Data Type' on the columns.
So I'm trying to figure out how to do a 'Detect Data Type' in the query formula language that I can attach to the end of the query that generates the table columns. I've tried grabbing the first entry in a column and doing Value.Type(column{0}), but this comes out as 'Text' for a column that contains integers. Pressing 'Detect Data Type' does, however, correctly identify the type as 'Whole Number'.
Does anyone know how to detect a column's entry types?
P.S. I'm not too worried about a column possibly holding values of different data types
You seem to have multiple issues here, and your solution will be fragile; there's a better way. But let's first deal with column type detection. Power Query uses 'any' as its go-to data type. You can write a function that samples the rows of a column in a table, does a best-match data type detection, and then explicitly sets the data type of the column. This is messy and tricky, since you need to do it once per column. It might be workable for a fixed schema, but for a dynamic schema you'll run into a couple of things very quickly. First, you'll need to write some crazy PQ code to list all the columns and run your function on each. This will work the first time, but it might break on subsequent refreshes, because data model changes are not allowed during refresh. If you're using a tool like Power BI Desktop, you'll be able to fix things up; if you publish your report to the Power BI service, you'll just see refresh errors.
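A rough sketch of such a function, assuming the first non-null value of each column is representative; values that arrive as text will still be detected as text, which matches the Value.Type behaviour described in the question:

let
    DetectColumnTypes = (tbl as table) as table =>
        let
            // Guess a type from the first non-null value in a list.
            GuessType = (values as list) as type =>
                let
                    sample = List.First(List.RemoveNulls(values), null)
                in
                    if sample = null then type text
                    else if Value.Is(sample, type number) then type number
                    else if Value.Is(sample, type date) then type date
                    else if Value.Is(sample, type logical) then type logical
                    else type text,
            // Build {column name, detected type} pairs for every column.
            transforms = List.Transform(
                Table.ColumnNames(tbl),
                (name) => {name, GuessType(Table.Column(tbl, name))})
        in
            Table.TransformColumnTypes(tbl, transforms)
in
    DetectColumnTypes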
Dynamic Schemas will suffer the same data model change issue I mentioned above.
The alternative solution that avoids these problems is to use a DirectQuery data source instead of Power Query. If you load your data into Azure SQL or a Tabular model, the reporting layer will pick up the updated fields automatically, so you don't have to work around this in PQ.