Assuming a Power BI dataset has columns with descriptions added in the modelling view, is there a way to generate a data dictionary from the Power BI report/dataset?
You can document the model using Power Query and Excel. Save a copy of your .pbix as a .pbit (template) file, then run the following query to extract a full data dictionary. This is not my code; it comes courtesy of RacketLuncher on Reddit.
let
Source = fUnzip(File.Contents("C:\Users\Your file.pbit")), // fUnzip is a custom helper function (not built in) that extracts files from a zip archive; it must be defined separately in your workbook
Filter_DataModelSchema = Table.SelectRows(Source, each ([FileName] = "DataModelSchema" and [Attributes]?[Hidden]? <> true)),
JSONFile = Json.Document(Filter_DataModelSchema{0}[Content],1200), // 1200 = UTF-16, the encoding of the DataModelSchema JSON
model = JSONFile[model], // Start from here to change whether you want to pick Tables, or Relationships, or other parts of the model.
tables = model[tables], // Here we are picking the tables metadata
#"Converted to Table" = Table.FromList(tables, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"name", "lineageTag", "modifiedTime", "structureModifiedTime", "columns", "partitions", "measures", "annotations"}, {"name", "lineageTag", "modifiedTime", "structureModifiedTime", "columns", "partitions", "measures", "annotations"}),
#"Expanded columns" = Table.ExpandListColumn(#"Expanded Column1", "columns"),
#"Expanded columns1" = Table.ExpandRecordColumn(#"Expanded columns", "columns", {"name", "dataType", "isKey", "sourceColumn", "formatString", "lineageTag", "summarizeBy", "annotations", "sortByColumn"}, {"columns.name", "columns.dataType", "columns.isKey", "columns.sourceColumn", "columns.formatString", "columns.lineageTag", "columns.summarizeBy", "columns.annotations", "columns.sortByColumn"}),
#"Expanded measures" = Table.ExpandListColumn(#"Expanded columns1", "measures"),
#"Expanded measures1" = Table.ExpandRecordColumn(#"Expanded measures", "measures", {"name", "expression", "formatString", "displayFolder", "lineageTag", "annotations"}, {"measures.name", "measures.expression", "measures.formatString", "measures.displayFolder", "measures.lineageTag", "measures.annotations"})
in
#"Expanded measures1"
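The same idea can be sketched outside Power Query. The following is a minimal Python sketch (hypothetical helper name, assuming the standard .pbit layout): a .pbit file is a zip archive whose DataModelSchema member is UTF-16-encoded JSON, which is why the M query above passes 1200 (UTF-16) to Json.Document.

```python
import json
import zipfile

def read_data_dictionary(pbit_file):
    """Extract table/column/measure metadata from a .pbit template.

    `pbit_file` may be a path or a file-like object. The model metadata
    lives in a zip member named "DataModelSchema", stored as UTF-16 JSON.
    """
    rows = []
    with zipfile.ZipFile(pbit_file) as z:
        # decode("utf-16") strips a BOM if present and assumes little-endian otherwise
        model = json.loads(z.read("DataModelSchema").decode("utf-16"))["model"]
    for table in model.get("tables", []):
        for col in table.get("columns", []):
            rows.append({"table": table["name"], "object": col["name"],
                         "kind": "column", "description": col.get("description")})
        for measure in table.get("measures", []):
            rows.append({"table": table["name"], "object": measure["name"],
                         "kind": "measure", "description": measure.get("description")})
    return rows
```

Note that descriptions only appear if they were actually set in the modelling view; `.get("description")` returns None for objects without one.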
I'm trying to auto-update stock values using a Yahoo Finance CSV download link, like this:
Source = Csv.Document(
Web.Contents(
"https://query1.finance.yahoo.com/v7/finance/download/"
& Comp
& "?period1=1022112000&period2="
& LastDate
& "&interval=1d&events=history"
),
[Delimiter = ",", Columns = 7, Encoding = 1252, QuoteStyle = QuoteStyle.None]
)
Extra info: Comp and LastDate are custom parameters; Comp supplies the company ticker and LastDate records the most recent stock date.
I keep getting authentication errors every time I try to access the data, whether with anonymous or user login. What can I do?
Thanks
Please check the value you are passing into the URL string; there is probably an error there.
You can compare against this example, which uses a function:
let
Source = (STOCKNAME, StartDat, EndDat) => let
Source = Csv.Document(Web.Contents("https://query1.finance.yahoo.com/v7/finance/download/"&STOCKNAME&"?period1="&StartDat&"&period2="&EndDat&"&interval=1d&events=history&includeAdjustedClose=true"),[Delimiter=",", Columns=7, Encoding=1252, QuoteStyle=QuoteStyle.None]),
#"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"Date", type date}, {"Open", type text}, {"High", type text}, {"Low", type text}, {"Close", type text}, {"Adj Close", type text}, {"Volume", Int64.Type}})
in
#"Changed Type"
in
Source
Query:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WCgkPCVLSUTI0M7E0MzY1MjCAcsyNDM2AnFidaCVPvxBnVFEIx8LAwgCkJBYA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [TICKER = _t, Start = _t, End = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"TICKER", type text}}),
#"Invoked Custom Function" = Table.AddColumn(#"Changed Type", "GetStock", each GetStock([TICKER], [Start], [End])),
#"Expanded GetStock" = Table.ExpandTableColumn(#"Invoked Custom Function", "GetStock", {"Date", "Open", "High", "Low", "Close", "Adj Close", "Volume"}, {"GetStock.Date", "GetStock.Open", "GetStock.High", "GetStock.Low", "GetStock.Close", "GetStock.Adj Close", "GetStock.Volume"})
in
#"Expanded GetStock"
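One thing worth checking is how Comp and LastDate render into the URL: period1 and period2 must be Unix epoch seconds formatted as text, and passing a date string instead is a common cause of bad requests. Here is a small Python sketch of that URL construction (a hypothetical helper, not the poster's code). Bear in mind that this Yahoo endpoint has also been known to require cookie/consent headers that Power Query's anonymous credential may not be able to supply, in which case the URL itself is not the problem.

```python
from datetime import datetime, timezone

def yahoo_download_url(ticker, start, end):
    """Build the historical-data CSV URL used in the query above.

    `start` and `end` are naive datetimes interpreted as UTC; they are
    converted to Unix epoch seconds, rendered as text.
    """
    def to_epoch(d):
        return str(int(d.replace(tzinfo=timezone.utc).timestamp()))
    return ("https://query1.finance.yahoo.com/v7/finance/download/"
            + ticker
            + "?period1=" + to_epoch(start)
            + "&period2=" + to_epoch(end)
            + "&interval=1d&events=history&includeAdjustedClose=true")
```

For example, the question's hard-coded period1=1022112000 corresponds to 2002-05-23 00:00 UTC.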
The JSON response (pictured in the original post) has Items and NextPageLink: the data sits inside Items, and NextPageLink contains the link to the next page of data.
I am trying to automate a REST API call whose JSON response spans several pages.
The idea is to call NextPageLink automatically until there is no NextPageLink, and land all the data in a single table.
Here is the link I am using to pull data: https://prices.azure.com/api/retail/prices
Here is the last page link: https://prices.azure.com/api/retail/prices?$skip=408500
Please help me.
Create function ReadSingle
(offset) =>
let Source = Json.Document(Web.Contents("https://prices.azure.com/api/retail/prices?$skip=" & Number.ToText(offset)))[Items]
in Source
Use a loop to read in the data. You could repeat until done, but frankly my computer locked up, so I'm reading offsets 0 through 2500 in steps of 100 (26 pages):
let Source
= List.Generate(
() => [ Offset = 0, Data = ReadSingle(0) ],
each [Offset] <= 2500,
each [ Data = ReadSingle( [Offset] + 100 ), // read the next page, then advance the offset; using [Offset] alone would re-read the previous page
Offset = [Offset] + 100 ],
each [Data]
),
z=Table.FromColumns(Source),
#"Added Index" = Table.AddIndexColumn(z, "Index", 0, 1, Int64.Type),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Added Index", {"Index"}, "Attribute", "Value"),
#"Removed Columns" = Table.RemoveColumns(#"Unpivoted Other Columns",{"Index", "Attribute"}),
#"Expanded Value" = Table.ExpandRecordColumn(#"Removed Columns", "Value", {"currencyCode", "tierMinimumUnits", "retailPrice", "unitPrice", "armRegionName", "location", "effectiveStartDate", "meterId", "meterName", "productId", "skuId", "productName", "skuName", "serviceName", "serviceId", "serviceFamily", "unitOfMeasure", "type", "isPrimaryMeterRegion", "armSkuName"}, {"currencyCode", "tierMinimumUnits", "retailPrice", "unitPrice", "armRegionName", "location", "effectiveStartDate", "meterId", "meterName", "productId", "skuId", "productName", "skuName", "serviceName", "serviceId", "serviceFamily", "unitOfMeasure", "type", "isPrimaryMeterRegion", "armSkuName"})
in #"Expanded Value"
If you wanted, you could instead repeat the loop until NextPageLink is null. See my favorite reference on this: "Use List.Generate to make API Calls in Power Query M".
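That repeat-until-NextPageLink-is-null pattern can be sketched language-agnostically. Here it is in Python with an injectable fetch callable so nothing touches the network (the Items/NextPageLink field names are taken from the question; in Power Query, `fetch` corresponds to the ReadSingle step):

```python
def read_all_pages(fetch, first_url):
    """Follow NextPageLink until it is absent or null, collecting Items.

    `fetch` is any callable that takes a URL and returns the decoded
    JSON page as a dict.
    """
    items = []
    url = first_url
    while url:
        page = fetch(url)
        items.extend(page.get("Items", []))
        url = page.get("NextPageLink")  # None/missing on the last page
    return items
```

Because the paging state is just "the next URL", this avoids having to guess the total page count up front, unlike the fixed-offset loop above.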
In the following code, I'm getting an error on line 10 ("Expanded Table Column1"):
let
Source = SharePoint.Files("https://microsoft.sharepoint.com/teams/Fake_Folder/", [ApiVersion = 15]),
#"Added Custom" = Table.AddColumn(Source, "find file", each if Text.Contains([Name], "Fake_Name")
then 1 else 0),
#"Filtered Rows" = Table.SelectRows(#"Added Custom", each ([Name] <> null)),
#"Filtered Rows1" = Table.SelectRows(#"Filtered Rows", each ([find file] = 1)),
#"Filtered Hidden Files1" = Table.SelectRows(#"Filtered Rows1", each [Attributes]?[Hidden]? <> true),
#"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File (6)", each #"Transform File (6)"([Content])),
#"Removed Other Columns1" = Table.SelectColumns(#"Invoke Custom Function1", {"Transform File (6)"}),
#"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File (6)",
Basically, I'm selecting a file that matches the name "Fake_Name" and then expanding that file, and I'm getting the error:
We cannot convert the value null to type Logical
Prior to the error the file is located and filtered, and then, when it needs to be expanded as a table, it runs into a null value. How do I fix this when I can't navigate to the error and the file itself doesn't contain the nulls I'm looking for?
Found the issue: "Sample File 6" (another file referenced by this query) had a problem that caused it to be unloaded, which in turn caused the error seen here.
Still stumbling my way through Power BI.
I have added a new column to just a few of the files in the folder that I had already pulled in and combined, but this column does not appear after a refresh. Despite endless research, I am at a loss as to how to fix this in the Advanced Editor.
Your assistance would be greatly appreciated.
Code below:
Source = Folder.Files("C:\Users\Sarah\OneDrive\FEEDLOT\APS Files\Original Files"),
#"Filtered Hidden Files1" = Table.SelectRows(Source, each [Attributes]?[Hidden]? <> true),
#"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File", each #"Transform File"([Content])),
#"Renamed Columns1" = Table.RenameColumns(#"Invoke Custom Function1", {"Name", "Source.Name"}),
#"Removed Other Columns1" = Table.SelectColumns(#"Renamed Columns1", {"Source.Name", "Transform File"}),
#"Removed Errors1" = Table.RemoveRowsWithErrors(#"Removed Other Columns1", {"Transform File"}),
#"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Errors1", "Transform File", Table.ColumnNames(#"Transform File"(#"Sample File"))),
#"Changed Type" = Table.TransformColumnTypes(#"Expanded Table Column1",{{"Source.Name", type text}, {"Tag Number", type text}, {"Electronic ID", type text}, {"NLIS", type any}, {"Date", type datetime}, {"Live Weight (kg)", type number}, {"Draft", type any}, {"Condition Score", type any}, {"Notes", type any}}),
#"Renamed Columns" = Table.RenameColumns(#"Changed Type",{{"Source.Name", "APS Source File Name"}, {"Tag Number", "Visual ID"}}),
#"Split Column by Delimiter" = Table.SplitColumn(#"Renamed Columns", "APS Source File Name", Splitter.SplitTextByEachDelimiter({"-"}, QuoteStyle.Csv, true), {"APS Source File Name.1", "APS Source File Name.2"}),
#"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"APS Source File Name.1", type text}, {"APS Source File Name.2", type text}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type1",{"APS Source File Name.2"}),
#"Renamed Columns2" = Table.RenameColumns(#"Removed Columns",{{"APS Source File Name.1", "APS Source File Name"}})
First of all, check twice that you are reading the correct file, and back up the current script. Then I would try two approaches:
Delete steps until you find the one that causes the problem. Then fix that step and check whether the columns appear at the end of the pipeline.
If this does not solve the issue, I would go with the second approach:
Create a new PBIX and re-implement all the steps one by one until you find the one causing the new columns to disappear.
Don't try to fix this in the Advanced Editor alone. Instead, step through the query and inspect the result after each step; that will show you where the problem is.
Without seeing your data it is not possible to say exactly where your code needs to change, but you will be able to see it by stepping through the actions one by one.
I'm trying to model a data source in Power BI, but I'm not managing it.
Could you help me with alternatives for creating a new column? The data source is in Excel, and the data arrives with subtotal rows by type (XPTO, XPT, etc.). I want to put these types as corresponding values in a new column against their items. I tried via Power Query and DAX, but couldn't get it to work.
(Screenshots of the original source, the modifications needed, and the source file are omitted here.)
This assumes that you can identify the Subtotal Rows by checking the position of the first space in the first column:
let
Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Added RowType" = Table.AddColumn(Source, "RowType", each Text.PositionOf([#"Centro financ./item orçamento"]," "), Int64.Type), // 0-based position of the first space
#"Added Type" = Table.AddColumn(#"Added RowType", "Type", each if [RowType] = 4 then Text.BetweenDelimiters([#"Centro financ./item orçamento"], " ", " ") else null, type text), // subtotal rows have the first space at position 4; the type sits between the first and second spaces
#"Filled Down" = Table.FillDown(#"Added Type",{"Type"}),
#"Filtered Rows" = Table.SelectRows(#"Filled Down", each ([RowType] = 8)), // keep only detail rows (first space at position 8)
#"Removed Columns" = Table.RemoveColumns(#"Filtered Rows",{"RowType"}),
#"Reordered Columns" = Table.ReorderColumns(#"Removed Columns",List.Combine({{"Type"}, Table.ColumnNames(Source)}))
in
#"Reordered Columns"
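The classify/fill-down/filter pattern used above can be sketched in plain Python (the row layout and indent positions here are assumptions for illustration, mirroring the M query: subtotal rows have their first space at position 4, detail rows at position 8):

```python
def tag_rows_with_type(rows):
    """Fill each detail row with the type from the subtotal row above it.

    Subtotal rows are identified by the position of the first space in
    the text; the type is carried down, and only detail rows are kept.
    """
    result = []
    current_type = None
    for text in rows:
        indent = text.index(" ") if " " in text else -1
        if indent == 4:  # subtotal row, e.g. "C100 XPTO subtotal"
            current_type = text.split(" ")[1]  # token between first and second space
        elif indent == 8:  # detail row, e.g. "615x9212 Mat1 100"
            result.append((text, current_type))
    return result
```

This relies on the same assumption as the M query: the width of the leading code reliably distinguishes subtotal rows from detail rows.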
Another solution is to include another data source in Power BI, which will look something like this:
Item            Type
615x92120 Mat1  XPTA
615x92121 Mat2  XPTA
615x92122 Mat3  XPTU
615x92123 Mat4  XPTU
Then do a join (Merge Queries) between your existing table and this one to bring the Type into your existing table. Once you have done that, you can filter for blank or null Type values; those rows are the subtotal lines to delete.
Note: this only works if you know all the items and their corresponding types in advance.
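The lookup-join approach can be sketched as follows (Python, with made-up item codes; the dict stands in for the second data source and `.get` gives the left-join behavior of Merge Queries):

```python
def join_type(items, lookup):
    """Left-join items to an item-to-type lookup table.

    Rows whose item is missing from the lookup get None, which you can
    then filter out; those are the subtotal lines to delete.
    """
    return [(item, lookup.get(item)) for item in items]
```

Filtering the result to rows where the type is None identifies exactly the lines the answer suggests deleting.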