I have data from an external source that is downloaded in CSV format. The data shows interactions from several users and doesn't have an ID column. The problem I'm having is that I'm not able to use an index, because multiple entries represent interactions and processes. An interaction is the group of processes a specific user does, and a process is each action taken within a specific interaction. Any user can repeat the same interaction at any time of day. The data looks like this:
User1 has 2 processes but there were 3 interactions. How can I assign an ID to each interaction, taking into consideration that there might be multiple processes for a single user in the same day? I tried grouping them in Power Query, but it groups the overall processes and I'm not able to distinguish the number of interactions. Is it better to do it in DAX?
Edit:
I notice that it is hard to understand what I need, but I think this would be a better way to see it:
Process2 shows the steps done in an interaction. Like in the column in yellow, I need to add an ID taking into consideration where an interaction starts and where it ends.
I'm not exactly sure I follow what you describe. It looks to me like User1 has 4 interactions (Processes AA, AB, BA, and BB), but you say 3.
Still, I decided to take a shot at providing an answer anyway. I started with a CSV file set up like you show.
Then I brought the CSV into Power Query and, just to add a future point of reference so that you could follow the ID assignments better, added an index column that I called startingIndex.
Then I added a custom column combining the processes that I understand actually define an interaction.
Then I grouped everything by users and Interactions into a column named allData.
Then I added a custom column to copy the column that was created from the earlier grouping, to sort the tables within it, and to add an index to each table within it. This essentially indexed each user's interaction group. (Because all of your interactions occur on the same date(s), the sorting doesn't help much. But I did it to show where you could do it if you included datetime info instead of just a date.)
Then I added a custom column to copy the column that was created earlier to add the interactions index, and to add an Id item within each table within it. I constructed each Id by combining the user, interactions, and interactionIndex for each.
Then I selected the latest column I had created (complexId) and removed all other columns.
Last, I expanded all tables without including the Interactions and Index columns. (The Index column was the index used for the interactions within the groups and no longer needed.) I included the startingIndex column just so you could see where items originally were at the start, in comparison to their final Id.
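The steps above could be sketched in M roughly like this (the file path and the column names User, Process, and Date are assumptions based on the description, not the actual source):

```m
let
    // assumed source layout: User, Date, Process
    Source = Csv.Document(File.Contents("C:\data\interactions.csv"), [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // reference index so the original row order stays visible at the end
    WithStartingIndex = Table.AddIndexColumn(Promoted, "startingIndex", 0, 1, Int64.Type),
    // combine the processes that actually define an interaction
    WithInteractions = Table.AddColumn(WithStartingIndex, "Interactions", each [User] & [Process]),
    // group everything by user and interaction, keeping all rows in a nested table
    Grouped = Table.Group(WithInteractions, {"User", "Interactions"}, {{"allData", each _, type table}}),
    // sort and index the rows inside each group; with datetime data the sort
    // would put repeated interactions in chronological order
    IndexedGroups = Table.TransformColumns(Grouped,
        {"allData", each Table.AddIndexColumn(Table.Sort(_, {{"Date", Order.Ascending}}), "interactionIndex", 1, 1)}),
    // build an Id inside each nested table from user, interaction, and index
    WithId = Table.TransformColumns(IndexedGroups,
        {"allData", each Table.AddColumn(_, "Id",
            (r) => r[User] & "-" & r[Interactions] & "-" & Text.From(r[interactionIndex]))}),
    // keep only the nested tables and re-expand
    Expanded = Table.ExpandTableColumn(Table.SelectColumns(WithId, {"allData"}), "allData",
        {"startingIndex", "User", "Process", "Date", "Id"})
in
    Expanded
```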
Given your new example, to create the Interaction ID you show, you only need the first two columns of the table. If it isn't part of the original data, you can easily generate the third column (Process2).
It appears you want to increment the interaction ID whenever the Process changes.
Please read the comments in the M code and explore the Applied Steps to better understand the algorithm:
M Code
let
//be sure to change table name in next row to your real table name
Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{
{"User", type text},
{"Process", type text},
{"Process2", type text}
}),
//add an index column
idx = Table.AddIndexColumn(#"Changed Type", "Index", 0, 1, Int64.Type),
//Custom column returns
//  0 if the current Index is 0 (first row),
//  the Index if the user or process changed compared with the previous row,
//  else null
#"Added Custom" = Table.AddColumn(idx, "Custom",
each if [Index]=0
then 0
else
if [Process] <> idx[Process]{[Index]-1} or [User] <> idx[User]{[Index]-1} then [Index]
else null),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Index"}),
//Fill down the custom column
// now have same number for each interactive group
#"Filled Down" = Table.FillDown(#"Removed Columns",{"Custom"}),
//Group by the "filled down" custom column with no aggregation
#"Grouped Rows" = Table.Group(#"Filled Down", {"Custom"}, {
{"all", each _, type table [User=nullable text, Process=nullable text, Process2=nullable text, Custom=number]}
}),
//add a one-based Index column to the grouped table
#"Added Index" = Table.AddIndexColumn(#"Grouped Rows", "Interaction ID", 1, 1, Int64.Type),
#"Removed Columns1" = Table.RemoveColumns(#"Added Index",{"Custom"}),
//Re-expand the table
#"Expanded all" = Table.ExpandTableColumn(#"Removed Columns1", "all",
{"User", "Process", "Process2"}, {"User", "Process", "Process2"})
in
#"Expanded all"
I imported a table from SQL which has more than 100k rows. I was connecting it to the data model, but then I found that it has some duplicate rows. I identified those duplicate rows; they repeat twice, and I want to keep the first and delete the whole second copy. I tried for hours in the query editor adding a custom column, but it just wouldn't let me add a new column condition the way the data view in Power BI did.
Hope you'll get a better solution from the SO contributors; if not, you can try something like this.
let
Source=...
filtered =
Table.RemoveMatchingRows(
Source
,{[Keycode= "qwe123", Desc="rty456"]}
,"Desc"
)
in
filtered
Edited: Group, and find the first occurrence, like here
#"Grouped Rows" = Table.Group(#"PriorStepNameHere", {"TestColumn1","TestColumn2"}, {
    {"data", each Table.RemoveColumns(
        Table.AddColumn(
            Table.AddIndexColumn(_, "Index", 0, 1, Int64.Type),
            "First Instance?", each if [Index] = 0 then 1 else 0),
        {"Index"}), type table}
}),
then expand
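Afterwards, expanding and keeping only the first occurrences could look roughly like this (the expanded column names and step names are placeholders, not from the original query):

```m
// expand the nested tables, then keep only the first row of each group
#"Expanded data" = Table.ExpandTableColumn(#"Grouped Rows", "data",
    {"OtherColumn", "First Instance?"}),
#"Kept First" = Table.SelectRows(#"Expanded data", each [#"First Instance?"] = 1),
#"Removed Flag" = Table.RemoveColumns(#"Kept First", {"First Instance?"})
```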
I have a table that's generated when I pull data from an accounting software - the example columns are months/years in the format as follows (It pulls all the way to current day, and the last month will be partial month data):
Nov_2020
Dec_2020
Jan_2021
Feb_1_10_2021 (Current month, column to remove)
... So on and so forth.
The goal I have been trying to figure out is how to use the Power Query editor to remove the last column (the partial month). I tried messing around with the text length to no avail (the goal being to remove anything with a text length > 8, so the full months' data would show but the last month wouldn't). I can't just remove based on a text filter, because if someone were to pull the data a year from now it would have to account for 2021/2022.
Is this possible to do in PQ? Sorry, I'm new to it so if I need to elaborate more I can.. Thanks!
You can do this with Table.SelectColumns where you use List.Select on the Table.ColumnNames.
= Table.SelectColumns(
PrevStep,
List.Select(Table.ColumnNames(PrevStep), each Text.Length(_) <= 8)
)
Although both Alexis Olson's and Justyna MK's answers are valid, there is another approach. Since it appears that you're getting data for each month in a separate column, what you will surely want to do is unpivot your data, that is, transform those columns into rows. It's the only sensible way to get good material for analysis; therefore, I would suggest unpivoting the columns first, then simply filtering out the rows containing the last month.
To make it dynamic, I would use the "Unpivot Other Columns" option - you select the columns to keep, and it transforms the remaining columns into rows in such a way that two new columns are created: one containing the former column names and the other containing the values.
To illustrate what I mean by unpivoting, when you have data like this:
You're automatically transforming that into this:
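A rough M sketch of that unpivot-then-filter idea, assuming an identifying column called Account (a made-up name) sitting next to the month columns:

```m
let
    Source = Excel.CurrentWorkbook(){[Name = "Table1"]}[Content],
    // keep the identifying column; every month column becomes attribute/value rows
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Account"}, "Month", "Value"),
    // full months follow the MMM_YYYY pattern (8 characters), so drop anything longer
    Filtered = Table.SelectRows(Unpivoted, each Text.Length([Month]) <= 8)
in
    Filtered
```

This stays dynamic in future years because the filter keys on the column-name pattern, not on any particular month or year.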
You can try to do it through Power Query's Advanced Editor. Assign the name of the last column to LastColumn variable and then use it in the last step (Removed Columns).
let
Source = Excel.Workbook(File.Contents("<Excel file path>"), null, true),
tblPQ_Table = Source{[Item="tblPQ",Kind="Table"]}[Data],
#"Changed Type" = Table.TransformColumnTypes(tblPQ_Table,{{"Nov_2020", Int64.Type}, {"Dec_2020", Int64.Type}, {"Jan_2021", Int64.Type}, {"Feb_1_10_2021", Int64.Type}}),
LastColumn = List.Last(Table.ColumnNames(#"Changed Type")),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type",{LastColumn})
in
#"Removed Columns"
The picture I have attached shows what my power query table looks like (exactly the same as source file) and then underneath what I would like the final end product to look like.
Correct me if I'm wrong, but I thought the purpose of Power Query/Power BI was to not manipulate the source file, and instead do this kind of change in Power Query/Power BI?
If that's the case, how can I enter new columns and data to the existing table below?
You can add custom columns without manipulating the source file in Power BI. Please refer to the link below.
https://learn.microsoft.com/en-us/power-bi/desktop-add-custom-column
EDIT: Editing my answer based on your comment - not sure if this helps.
Click on Edit Queries after loading the source file into Power BI.
Enter Data - Using the 'Enter Data' button, I entered the sample data you provided and created a new table. Data can be copy-pasted from Excel, and you can enter new rows manually. I'm using the Tag Number column as a reference.
Merge Queries - Once the above table is created, merge it with the original table on the Tag Number column.
Expand Table - In the original table, expand the merged table. Uncheck Tag Number (as it is already present) and uncheck 'use original column name as prefix'.
Now the table will look the way you wanted it to.
You can always change the data (add new columns/rows) manually in the new table by clicking on the gear button next to Source.
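The Merge Queries / Expand Table steps above generate M roughly like this (OriginalTable and EnteredData are placeholder query names, and the expanded column Mob follows the example):

```m
let
    // left-join the manually entered table onto the original on Tag Number
    Merged = Table.NestedJoin(OriginalTable, {"Tag Number"}, EnteredData, {"Tag Number"},
        "EnteredData", JoinKind.LeftOuter),
    // expand without the duplicate Tag Number and without the column-name prefix
    Expanded = Table.ExpandTableColumn(Merged, "EnteredData", {"Mob"}, {"Mob"})
in
    Expanded
```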
Here is the closest solution I found to "manual data entry", giving you as much freedom as you'd like to add rows of data, if the columns that you want to create do not follow a specific pattern.
I used an example for the column "Mob". I have not exactly reproduced the content of your cells but I hope that this will not be an issue to understand the logic.
Here is the data I am starting with:
Here is the Power Query in which I "manually" add a row:
#"Added Conditional Column" = Table.AddColumn(#"Changed Type", "Mob", each if [Tag Number] = "v" then null else null),
NewRows = Table.InsertRows(#"Added Conditional Column", 2, {[Mob="15-OHIO", Tag Number="4353654", Electronic ID=1.5, NLIS="", Date="31/05/2015", Live Weight="6", Draft="", Condition store="", Weighing Type="WEAN"]})
in
NewRows
1) I first created a column with only null values:
#"Added Conditional Column" = Table.AddColumn(#"Changed Type", "Mob", each if [Tag Number] = "v" then null else null),
2) With the "Table.InsertRows" function:
I indicated the specific line: 2 (knowing that Power BI starts counting at zero, at the headers, so it will be the third line in the file)
I indicated the column at which I wanted to insert the value, i.e "Mob"
I indicated the values that all the other columns should have in the inserted row:
NewRows = Table.InsertRows(#"Added Conditional Column", 2, {[Mob="15-OHIO", Tag Number="4353654", Electronic ID=1.5, NLIS="", Date="31/05/2015", Live Weight="6", Draft="", Condition store="", Weighing Type="WEAN"]})
Here is the result:
I hope this helps.
You can apply this logic for all the other rows.
I do not think that this is very scalable, however, because each time you have to indicate the values that the row should have in the other columns as well. There might be a better option.
I'm trying to create a lookup table combining my 3 source files' primary key columns; this way I won't have to do an outer join to find the missing records from each source and then append them together. I've found how to "combine" two source files, but I can't figure out how to drill into the column/field lists so that I can select only Column 1 (or the "Item Code" header name in the Excel files).
Here is the code I have so far to combine 2/3 files (as a trial):
let
Source = Table.Combine({Excel.Workbook(File.Contents("C:\Users\Desktop\Dry Good Demad-Supply Report\MRP_ParentDmd\Data_Sources\JDE_MRP_Dmd.xlsx"), null, true),
Excel.Workbook(File.Contents("C:\Users\Desktop\Dry Good Demad-Supply Report\MRP_ParentDmd\Data_Sources\JDE_Open_PO.xlsx"), null, true)})
in Source
If you've got a less than ideal data source (ie lots of irrelevant columns, duplicates in the data you want) then one way to avoid materialising a whole bunch of unnecessary data would be to perform all your transformations/filtering on nested table cells rather than loading all the data up just to remove columns/dupes.
The M code below should be a rough start that hopefully gets you on the way
let
//Adjust the Source step to refer to the relevant folder your 3 source files are saved in
Source = Folder.Files("C:\Users\Desktop\Dry Good Demad-Supply Report\MRP_ParentDmd\Data_Sources"),
//Filter the file list to leave just your 3 source files if required
#"Filtered Rows" = Table.SelectRows(Source, each ([Extension] = ".xlsx")),
//Remove all columns except the binary Content column
#"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows",{"Content"}),
//Convert the binary file to the file data ie sheets, tables, named ranges etc - the same data you get when you use a file as a source
#"Workbook Data" = Table.TransformColumns(#"Removed Other Columns",{"Content", each Excel.Workbook(_)}),
//Filter the nested file data table cell to select the sheet you need from your source files - may not be necessary depending on what's in the files
#"Sheet Filter" = Table.TransformColumns(#"Workbook Data",{"Content", each Table.SelectRows(_, each [Name] = "Sheet1")}),
//Step to Name the column you want to extract data from
#"Column Name" = "Column1",
//Extract a List of the values in the specified column
#"Column Values" = Table.TransformColumns(#"Sheet Filter",{"Content", each List.Distinct(Table.Column(_{0}[Data],#"Column Name"))}),
//Expand all the lists
#"Expanded Content" = Table.ExpandListColumn(#"Column Values", "Content"),
#"Removed Duplicates" = Table.Distinct(#"Expanded Content")
in
#"Removed Duplicates"
EDIT
To select multiple columns and return only the distinct rows, you could change the steps starting from #"Column Name":
This could end up taking a fair bit longer than the previous step depending on how much data you have, but it should do the job
//Step to Name the column you want to extract data from
#"Column Name" = {"Column1","Column2","Column5"},
//Extract a List of the values in the specified column
#"Column Values" = Table.TransformColumns(#"Sheet Filter",{"Content", each Table.SelectColumns(_{0}[Data],#"Column Name")}),
//In each nested table, filter down to distinct rows
#"Distinct rows in Nested Tables" = Table.TransformColumns(#"Column Values",{"Content", each Table.Distinct(_)}),
//Expand nested table column
#"Expanded Content" = Table.ExpandTableColumn(#"Distinct rows in Nested Tables", "Content", #"Column Name"),
//Remove Duplicates in combined table
#"Removed Duplicates" = Table.Distinct(#"Expanded Content")
in
#"Removed Duplicates"
If you're starting out with Power Query, don't try to write your code manually and don't cram everything into one statement. Rather, use the ribbon commands and then edit the code if required.
For your scenario, you could create a separate query for each data source. Load these as connections only. Shape each data source to contain the columns you need. Then you can append the three data queries and further refine the result.
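For example, once each source is shaped into a connection-only query holding just the key column, the final append is short (JDE_MRP_Dmd and JDE_Open_PO are the query names from the question; ThirdSource is a hypothetical name for the third one):

```m
let
    Combined = Table.Combine({JDE_MRP_Dmd, JDE_Open_PO, ThirdSource}),
    // drop duplicate keys across the three sources
    Deduped = Table.Distinct(Combined)
in
    Deduped
```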
I have two different files, shown in the table below; one is bugtracker and the other is bugtracker (2).
Now I want to compare the two statuses.
If the status is different, then count it.
If all you're really asking for is a True or False comparison as to whether the 'Assigned User Name' and 'Status' of one table's record equal the 'Assigned User Name' and 'Status' of the other table's matching record, then using DAX's IF should work.
Assuming you've already matched and merged your "BugTracker" and "BugTracker (2)" table's records in order to get the table you have shown above, and the merged table's name is "BugTrackerMerged", you could just add a column with this DAX command:
Column = if(BugTrackerMerged[Status]=BugTrackerMerged[Status2],TRUE(),FALSE())
Note that I named the second status column 'Status2', instead of 'Status'. Both status columns cannot have the same name.
If you haven't already merged the table's records, you'll need to do that first. I find it easiest to do that with Power Query (Power BI's Edit Queries feature).
(I apologize up front if the following is too detailed. Not knowing your level of Power Query expertise, I figured I'd simplify discussion via step-by-step tutorial. It's more straightforward than it "looks".)
In order to merge the two tables ("BugTracker" and "BugTracker (2)"), you'll need a common keyfield for matching and merging. For this situation, I assume your first record in "BugTracker" should match and merge with the first record of "BugTracker (2)", your second record in "BugTracker" should match and merge with the second record of "BugTracker (2)", and so on. Therefore, just add an index to each table.
For BugTracker, in Power Query select the "BugTracker" query:
Then click the "Add Column" tab, and then "Index Column". (That will add the index to the "BugTracker" table.)
Do the same for "BugTracker (2)".
With common indexes for both "BugTracker" and "BugTracker (2)" you can match and merge the two tables. Click the "Home" tab, then the drop-down arrow beside "Merge Queries", then "Merge Queries as New".
In the window that pops up, make the selections necessary so it looks like this and click "OK":
This creates a new query, likely called "Merge". At this point, I renamed that query to "BugTrackerMerged".
If you select that new query (now named "BugTrackerMerged") and click on "Source", under "Applied Steps"...
You'll see this code in the formula bar:
= Table.NestedJoin(BugTracker,{"Index"},#"BugTracker (2)",{"Index"},"NewColumn",JoinKind.FullOuter)
In that code, change "NewColumn" to "BugTracker (2)" to rename the column that is generated. (You could rename it in a separate step if you prefer, but I thought this approach was "cleaner".)
Then click the button, to the right of the "BugTracker (2)" column's title...
...to expand the tables in the column. You'll see a pop-up window like this:
Leaving the settings like shown here will expand (bring in) all the columns from the secondary table of the earlier merge. (That secondary table was "BugTracker (2)".) Using the original column name as prefix will help you keep straight which "Status" and "Assigned User Name" info comes from which table.
At this point, you have the merged info. You could go one step further and do the True/False comparison here as well, if you like. To do that, just add a new custom column: click the "Add Column" tab, then the "Custom Column" button:
Then, in the pop-up window, add this code:
if [Status]&[Assigned User Name]=[#"BugTracker (2).Status"]&[#"BugTracker (2).Assigned User Name"] then "True" else "False"
Like this:
You'll get a table like this:
Your data has a lot of "Trues" up front. You can easily see that there are also "Falses" though, by using the column's filter button.
Here's my Power Query (M) code for my three queries:
BugTracker:
let
Source = Excel.Workbook(File.Contents("C:\Users\MARC_000\Desktop\sample\Rowdata Programming 15 July 2017 (2).xlsx"), null, true),
BugTracker_Sheet = Source{[Item="BugTracker",Kind="Sheet"]}[Data],
#"Changed Type" = Table.TransformColumnTypes(BugTracker_Sheet,{{"Column1", type text}, {"Column2", type text}}),
#"Promoted Headers" = Table.PromoteHeaders(#"Changed Type", [PromoteAllScalars=true]),
#"Added Index" = Table.AddIndexColumn(#"Promoted Headers", "Index", 0, 1)
in
#"Added Index"
BugTracker (2):
let
Source = Excel.Workbook(File.Contents("C:\Users\MARC_000\Desktop\sample\Rowdata Programming 18 July 2017.xlsx"), null, true),
BugTracker_Sheet = Source{[Item="BugTracker",Kind="Sheet"]}[Data],
#"Changed Type" = Table.TransformColumnTypes(BugTracker_Sheet,{{"Column1", type text}, {"Column2", type text}}),
#"Promoted Headers" = Table.PromoteHeaders(#"Changed Type", [PromoteAllScalars=true]),
#"Added Index" = Table.AddIndexColumn(#"Promoted Headers", "Index", 0, 1)
in
#"Added Index"
BugTrackerMerged:
let
Source = Table.NestedJoin(BugTracker,{"Index"},#"BugTracker (2)",{"Index"},"BugTracker (2)",JoinKind.FullOuter),
#"Expanded BugTracker (2)" = Table.ExpandTableColumn(Source, "BugTracker (2)", {"Status", "Assigned User Name", "Index"}, {"BugTracker (2).Status", "BugTracker (2).Assigned User Name", "BugTracker (2).Index"}),
#"Added Custom" = Table.AddColumn(#"Expanded BugTracker (2)", "Custom", each if [Status]&[Assigned User Name]=[#"BugTracker (2).Status"]&[#"BugTracker (2).Assigned User Name"] then "True" else "False")
in
#"Added Custom"