I have the below chart, which is sorted by the count for each of the bins. I am trying to sort it by days open instead, starting from 0 - 5 days, then 5 - 10 days, etc.
I have added another table that has IDs for each of the bins (0 - 5 days is 1, 5 - 10 days is 2), but I am unable to use it for sorting.
Any ideas?
I always do this by adding a dimension table for sorting purposes. A Dim table for the bins would look like this:
Then go to the Data pane and set it up as shown in the picture below:
Select the Bin name column
Choose Modeling from the menu
Click Sort by column and choose the Bin order column
Then connect the Dim table to the fact table:
When building the visual, choose Bin name from the Dim table, not the Fact table!
The final step is to set up sorting in the visual:
Here are the Dim and Fact tables to reproduce the exercise.
Dim Table:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WcsrMUzDQNVXSUTJUitWB8E11DQ2AAkZwAUMDXSOQiDFcxAioByRigtBkoGsIVmSK0GZkoA0UMFOKjQUA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [#"Bin name" = _t, #"Bin order" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Bin name", type text}, {"Bin order", Int64.Type}})
in
#"Changed Type"
Fact Table:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("jdAxDoAgDAXQq5iu0qQtVmX1GoQDuHj/0SoJCWVh5KefR8kZrvtZCBUCMJRQz4pMFkgLmFC+ZGuJWKefUUL+h6zbekJrd3OV1EvHhBSnJHLU6ak0QaWRil5SBw2/pwPEMzvtHrLnlRc=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [#"Bin name" = _t, Frequency = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Bin name", type text}, {"Frequency", Int64.Type}})
in
#"Changed Type"
You should be able to use Sort by Column under the Modeling tab, where you sort your bin name column by the ID value column.
You need to:
Connect the bin columns, (0-5), (5-10), etc., from the two tables in your relationships.
In your second table, add a column called order: 1, 2, 3 for the bins (0-5), (5-10), and so on.
This should work.
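As a minimal sketch of what such an order table could look like in M (the bin labels and order values here are illustrative, not taken from the original data):

```m
let
    // Lookup table: one row per bin, with a numeric sort key
    BinOrder = Table.FromRecords({
        [#"Bin name" = "0 - 5 days",   #"Bin order" = 1],
        [#"Bin name" = "5 - 10 days",  #"Bin order" = 2],
        [#"Bin name" = "10 - 15 days", #"Bin order" = 3]
    }),
    // Make sure the sort key is numeric, otherwise Sort by Column sorts alphabetically
    Typed = Table.TransformColumnTypes(BinOrder, {{"Bin name", type text}, {"Bin order", Int64.Type}})
in
    Typed
```

Relate this table to the fact table on the bin name, then set Sort by column on Bin name to use Bin order.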
How can I make a Power Query join of two tables on the least difference between columns? I mean the absolute difference between the numbers.
I followed this great article: https://exceed.hr/blog/merging-with-date-range-using-power-query/
I tried adding this custom column analogously, where L stands for Tab1 and R for Tab2:
= Table.AddColumn(
Source,
"LeastAbsDifference",
(L) =>
Table.SelectRows( Tab2,
(R) => L[category] = R[category] and Number.Abs(L[target] - R[actual]) )
)
It produces an error:
Expression.Error: We cannot convert the value 4 to type Logical.
Tables to recreate the example:
// Tab1
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTJVitWJVnKCs5whrFgA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [category = _t, target = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"target", Int64.Type}})
in
#"Changed Type"
// Tab2
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTJUitWBsIzgLGMwywnIMoGzTOEsM6XYWAA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [category = _t, actual = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"actual", Int64.Type}})
in
#"Changed Type"
Here's one way:
Append the two tables
Group by Category
Output the desired columns as a Group aggregation
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTJVitWJVnKCs5whrFgA", BinaryEncoding.Base64),
Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [category = _t, target = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"target", Int64.Type}}),
Source2 = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclTSUTJUitWBsIzgLGMwywnIMoGzTOEsM6XYWAA=",
BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [category = _t, actual = _t]),
#"Changed Type1" = Table.TransformColumnTypes(Source2,{{"actual", Int64.Type}}),
//append the tables
append = Table.Combine({#"Changed Type",#"Changed Type1"}),
//Group by category, then output the desired columns
#"Grouped Rows" = Table.Group(append, {"category"}, {
{"target", each [target]{0},Int64.Type},
{"actual", (t)=> t[actual]{
List.PositionOf(List.Transform(t[actual], each Number.Abs(t[target]{0} - _)),
List.Min(List.Transform(t[actual], each Number.Abs(t[target]{0} - _))),Occurrence.First)},Int64.Type},
{"least difference", (t)=> List.Min(List.Transform(t[actual], each Number.Abs(t[target]{0} - _))),Int64.Type
}})
in
#"Grouped Rows"
Output from the above code:
I would like to acknowledge Kristian Rados, who provided the answer to my question in the comments of his article, Merging with date range using Power Query. With my gratitude, and by courtesy of the author, I am quoting the answer in full:
The reason your formula produces an error is the second argument of the Table.SelectRows function. There, you need to filter the table with a boolean (true/false) expression. In your case, the part of the code with the Number.Abs function returns a number instead of true/false (e.g. L[target] - R[actual] = 5 - 1 = 4). Filtering the table this way could be possible, but it would require multiple nested environments, which would result in a very complicated formula and slow performance.
I would suggest trying a different approach. By using your example from stack overflow I reproduced the problem. Below is a complete M code I came up with along with the explanation below:
let
Source = Tab1,
#"Merged Queries" = Table.NestedJoin(Source, {"category"}, Tab2, {"category"}, "Tab2", JoinKind.LeftOuter),
#"Expanded Tab2" = Table.ExpandTableColumn(#"Merged Queries", "Tab2", {"actual"}, {"actual"}),
#"Inserted Subtraction" = Table.AddColumn(#"Expanded Tab2", "Least difference", each Number.Abs([target] - [actual]), Int64.Type),
#"Grouped Rows" = Table.Group(#"Inserted Subtraction", {"category"}, {{"All", each Table.First(Table.Sort(_, {{"Least difference", Order.Ascending}}))}}),
#"Expanded All" = Table.ExpandRecordColumn(#"Grouped Rows", "All", {"target", "actual", "Least difference"}, {"target", "actual", "Least difference"})
in
#"Expanded All"
First, we merge the queries using the category column. After expanding the table, we subtract the two columns to get the absolute difference between target and actual. Then we group by category and sort each nested table by the Least difference column in ascending order (the Table.Sort function inside the grouped rows). Finally, we take the first row of each nested table (the Table.First function) and expand the record column.
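As a side note on the original error: the predicate passed to Table.SelectRows must return a logical value. A minimal sketch of a well-formed predicate (the constant 5 and the threshold 2 are purely illustrative, not the intended least-difference logic):

```m
// This returns a number, so it fails with "cannot convert ... to type Logical":
//   Table.SelectRows(Tab2, (R) => Number.Abs(5 - R[actual]))
// Comparing the number to something yields true/false, which SelectRows accepts:
FilteredRows = Table.SelectRows(Tab2, (R) => Number.Abs(5 - R[actual]) <= 2)
```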
The input data from my Excel source is shown here:
When I create a table report in Power BI, this is how the data set shows up.
How do I set up the linkage between the Name and Skills columns so that my Power BI report resembles the input source?
Ideally, you want to fill in those blank rows so that your data table looks like this:
Luckily, you can do this fairly easily in the query editor by replacing empty string rows with null and then using Transform > Fill Down to duplicate the last non-null value.
Here's what the full M code might look like:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8spPVdJRcgwPVorViVYCMatKi1JhHHfPEDDTN7ECyAsO9IFJBOfll6flJGbDVYakFiWmJJYkgvkBiaU5IKOCw+EGOQcoxcYCAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Name = _t, Skill = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Name", type text}, {"Skill", type text}}),
#"Replaced Value" = Table.ReplaceValue(#"Changed Type","",null,Replacer.ReplaceValue,{"Name"}),
#"Filled Down" = Table.FillDown(#"Replaced Value",{"Name"})
in
#"Filled Down"
Once you have all your rows properly filled, then you can make a matrix visual the way you'd like. (You'll need to put both columns in the Row field and turn off Stepped layout in the Format pane under Row headers.)
I have data as such:
I want to split the data into the following format based on date, so that going forward the dates can be split and I can use a slicer to select a range and get the index value for each selected date.
Please help with this.
Your pivoted result does not make sense. The first row, for example, combines rows #1 and #8 from your table, but there is a prefix 17/22/38 in row #8 which makes the URL different from the one in row #1. Where did this prefix go? How did it disappear, and why? The same applies to the other rows, e.g. Contact-us.
But otherwise, Pivot columns is what you need. If your original table looks like this:
Select the Date column and click the Pivot column command in the ribbon, and you will get a dialog like this:
Select Index as the values column. In your case it probably makes no sense to aggregate the indexes, so select Don't Aggregate. This will give you a result like this:
This is as close to your desired result as possible (considering the issue I mentioned above).
And here is the M code to reproduce the steps above:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("hdE9CoAwDIDRq0jmUpL0z55FOujsJAge36CQycSl3/IgJF0WoB6xR0bqEGDdLnnPY59ISjCCCViaPJCk7IEszR4o0uKBKm0eaNL6AMZPMOsIA3Rd0wCEfzOIdFFLsN7KEkn/wxL5Pca4AQ==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Date = _t, Search_Term = _t, Url = _t, Index = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Date", type date}, {"Search_Term", type text}, {"Url", type text}, {"Index", Int64.Type}}),
#"Pivoted Column" = Table.Pivot(Table.TransformColumnTypes(#"Changed Type", {{"Date", type text}}, "en-US"), List.Distinct(Table.TransformColumnTypes(#"Changed Type", {{"Date", type text}}, "en-US")[Date]), "Date", "Index")
in
#"Pivoted Column"
I created this matrix and set its values on rows to get this visual:
And I want to change the background color of each row according to my customer canvas, like this:
Is it possible to do that? Because I tried conditional formatting, but it didn't work.
Of course it is possible. Add a column with a color ID to your source table. Say, 1, 2, 3, 4 will be green, yellow, red, and blue, respectively. Be sure that the color ID is a number.
Display your data with a table visual. Then add conditional formatting to each column.
Set it up in this way:
Enter fields 1-4 exactly as they are shown above. I think these settings are self-explanatory. Fields 5-6 are up to you. Repeat the row conditional formatting for every column. And you are done:
Here is a sample code to reproduce the source table:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUUosKMhJVYrViVYyAvKSEvOAEMw1BnKTM1KLiirBXBMgNyW1PAkiEAsA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Column1 = _t, Column2 = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Column1", Int64.Type}, {"Column2", type text}}),
#"Renamed Columns" = Table.RenameColumns(#"Changed Type",{{"Column1", "ColorID"}, {"Column2", "Category"}})
in
#"Renamed Columns"
I am attempting to load several Excel files into Power BI. These files are pretty small (<= ~1k rows). One of these sources must be cleaned up. In particular, one of its columns has some bad data. The correct data is stored in another Excel file. For example:
table bad:
ID col1
1 0
2 0.5
3 2
4 -3
table correct:
ID colx
2 1
4 5
desired output:
ID col1
1 0
2 1
3 2
4 5
In SQL or other data visualization tools, I would left join the bad table to the correct table and then coalesce the bad and correct values. I know I have some options for implementing this in Power BI: one is the query editor (i.e., M), and another is the data model (i.e., DAX). Which option is best? And what would the implementation look like (e.g., if M, what does the query look like)?
While you can do this in DAX, I'd suggest doing it in the query editor. The steps would look roughly like this:
Merge the Correct table into the Bad table using a left outer join on the ID columns.
Expand out the Correct table to just get the Colx column.
Create a custom column to pick the values you want. (Add Column > Custom Column)
if [Colx] = null then [Col1] else [Colx]
You can remove the Col1 and Colx columns if you want, or just keep them. If you delete Col1, you can rename Col2 to Col1.
If you don't want the source tables floating around, you can do all of the above in a single query similar to this:
let
BadSource = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUTJQitWJVjICsfRMwWxjINsIzDIBsnSNlWJjAQ==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [ID = _t, Col1 = _t]),
CorrectSource = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlLSUTJUitWJVjIBskyVYmMB", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [ID = _t, Colx = _t]),
Bad = Table.TransformColumnTypes(BadSource,{{"ID", Int64.Type}, {"Col1", type number}}),
Correct = Table.TransformColumnTypes(CorrectSource,{{"ID", Int64.Type}, {"Colx", type number}}),
#"Merged Queries" = Table.NestedJoin(Bad,{"ID"},Correct,{"ID"},"Correct",JoinKind.LeftOuter),
#"Expanded Correct" = Table.ExpandTableColumn(#"Merged Queries", "Correct", {"Colx"}, {"Colx"}),
#"Added Custom" = Table.AddColumn(#"Expanded Correct", "Col2", each if [Colx] = null then [Col1] else [Colx]),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Col1", "Colx"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{{"Col2", "Col1"}})
in
#"Renamed Columns"